I'm trying to get my head around the workings of an xAPI package authored in Rise, which has been supplied to me so I can build a proof-of-concept test app.
I can see the functions built into the index.html page, and that things like progress and quiz scores are generated, but where do I find the endpoint for an LRS within the package?
I have incorporated the package into a test app I built, but rather than generate and send statements myself I would like to use what comes as part of the package.
If I import and play the package in SCORM Cloud, I get generated statements returned.
The only thing I can see is an entry in the tincan.js file, this.recordStores = []. Other than that I'm unsure where to go next. Any suggestions?
Generally this kind of package implements a set of guidelines that were released with the 0.9 version of the specification (at the time named the Tin Can API, later renamed xAPI). Those guidelines provide for a packaging and launch mechanism, which is what Rise has implemented. The launch mechanism indicates that the endpoint and authentication credentials will be passed on the query string to the launched content, where the content can retrieve them. The TinCanJS library used by Rise implements functionality to digest the query string and set up objects (the ones you find in this.recordStores) to communicate with the LRS identified in the query string parameters.
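For illustration only, a launch URL under those guidelines looks roughly like this (hypothetical host and values, wrapped here for readability; endpoint, auth, actor, registration and activity_id are the parameters the guidelines define):

index.html?endpoint=https%3A%2F%2Flrs.example.com%2Fxapi%2F
          &auth=Basic%20dGVzdDp0ZXN0
          &actor=%7B%22mbox%22%3A%22mailto%3Alearner%40example.com%22%7D
          &activity_id=http%3A%2F%2Fexample.com%2Factivities%2Fmy-course
          &registration=760e3480-ba55-4991-94b0-01820dbd23a2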
You have two primary options:
1. Get the query string parameters directly from the launch URL and process them yourself, potentially using the same global library objects (TinCan.LRS) that are already available, to get an LRS object you can then interact with as you see fit.
2. Leverage the objects already created for you via the this.recordStores list, which the package itself has prepared (see the sketch below).
There are pros and cons to both methods, and they largely depend on your familiarity with JavaScript and how flexible you need or want to be.
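As a minimal sketch of the second option, assuming the global TinCan object from tincan.js is available on the page and the package was launched with the parameters above (the verb and activity id here are hypothetical):

// Construct a TinCan instance from the launch URL; this digests the
// endpoint/auth/actor query string parameters and populates recordStores.
var tincan = new TinCan({ url: window.location.href });

// Send a statement through the LRS object(s) the library prepared.
tincan.sendStatement(
    {
        actor: tincan.actor, // also populated from the launch URL
        verb: { id: "http://adlnet.gov/expapi/verbs/experienced" },
        target: { id: "http://example.com/activities/my-poc" }
    },
    function () {
        console.log("statement send attempted", arguments);
    }
);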
I've recently updated my serverless project, and I've found that many things have changed in the last few updates.
https://serverless.com/
I don't fully understand what's the correct way to have multiple lambda functions and API Gateway endpoints related to the same project. With the old Serverless I had every lambda and endpoint as a completely separate function, and this worked pretty well for me.
I can't seem to do this anymore; if I try, my second lambda function overrides my first, presumably because my "service name" for both is the same. My service name is the same because I want both REST endpoints in the same API in API Gateway, since Serverless creates the API name based on the service name.
So then I tried to add both functions to the same "service". This worked for the most part, except that now I need to include the custom role statements for all my functions in the same role (because this one role is now linked to all my functions), effectively giving each individual function more permissions than it should have. The other issue is that the handler files for all the different functions are being put into every function's deployment bundle.
So basically, I'm not sure what the correct approach is for having multiple functions that relate to the same project but are separate in functionality. It used to make sense; now it doesn't.
If anybody can give me some pointers, I'd appreciate it. Thanks!
I understand your frustration. I had the same feeling until I looked deeper into the new version and formed a better understanding. One thing to note, though, is that the new version is not completely finished yet. So if something is completely missing, you can file an issue and have it prioritized before 1.0 is out.
You are supposed to define multiple functions under the same service in the functions: section of serverless.yml. To package these functions individually (excluding the code for the other functions) you will have to set individually: true under the package: section. You can then use the include and exclude options at the root level and at the function level as well. There's an upcoming change that will let you use glob syntax in your include and exclude options (for example **/*-fn.js). You can find more about packaging here: https://www.serverless.com/framework/docs/providers/aws/guide/deploying.
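A minimal serverless.yml sketch of that layout (service, function, and handler names are hypothetical; the per-function exclude lists could collapse into globs once that syntax lands):

service: my-project

provider:
  name: aws
  runtime: nodejs4.3

package:
  individually: true

functions:
  createThing:
    handler: handlers/create.handler
    events:
      - http:
          path: things
          method: post
    package:
      exclude:
        - handlers/list.js
  listThings:
    handler: handlers/list.handler
    events:
      - http:
          path: things
          method: get
    package:
      exclude:
        - handlers/create.js

Both functions land in the same API Gateway API because they share one service, but each deployment bundle excludes the other function's handler.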
I'm not sure how to use different roles for different functions under the same service, though. How did you do it with 0.5?
I was trying to find a solution for individual IAM roles per function as well. I couldn't find a way to do it, but while I was looking through the documentation I found this line: "Support for separate IAM Roles per function is coming soon." on this page, so at least we know they are working on it.
The "IAM Roles Per Function" plugin for Serverless allows you to do exactly what it says on the tin: specify roles for each function. You can still use the provider-level roles as well:
By default, function level iamRoleStatements override the provider level definition. It is also possible to inherit the provider level definition by specifying the option iamRoleStatementsInherit: true
EDIT: You can also apply a predefined AWS role at both the provider and function level.
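A sketch of what that looks like in serverless.yml with the serverless-iam-roles-per-function plugin (the statements and ARN are placeholders):

plugins:
  - serverless-iam-roles-per-function

functions:
  readThings:
    handler: handlers/read.handler
    iamRoleStatementsInherit: true   # also merge in the provider-level statements
    iamRoleStatements:
      - Effect: Allow
        Action:
          - dynamodb:GetItem
          - dynamodb:Query
        Resource: arn:aws:dynamodb:us-east-1:123456789012:table/things

Each function then gets its own role containing only the statements it needs, which addresses the over-permissioning problem from the question.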
I'm using the Check package to validate parameters passed to Meteor methods. And I'm using Audit argument checks to enforce this.
However, I've added another package, Meteor Tags, and when I try to use methods from the Tags package I get a server error: "Exception while invoking method '/patterns/addTag' Error: Did not check() all arguments during call to '/patterns/addTag'".
I think I understand why this error happens: the method in the Tags package doesn't check its inputs, so audit-argument-checks generates an error. But I can't find any way around this, apart from 1) not enforcing checking, or 2) hacking the Tags package methods so they use check. Neither of these seems like a great option: checking server parameters is a good idea, and hacking a package is not very maintainable.
Does anybody know if there is any smart way to use 'Audit argument checks' with packages that provide new server methods? I have looked at the Check documents, and searched online, but I haven't found an answer.
I hope this question makes sense.
Using audit-argument-checks is like saying: "I want to be serious about the security of the methods in my app." It's global to all methods in your app's codebase, including the methods from your installed packages.
There is no way to specify which parts of the app get checked, as that would be the equivalent of saying: "I want to be serious about the security of the methods I've written, but I don't care about the security holes created by some packages" (which doesn't make a lot of sense).
Note to package authors
Check your method arguments. It's not hard, and it prevents this situation from happening. Frankly, a package without this basic security really shouldn't be installed in the first place.
What you should do
Unless you have a throwaway app, I wouldn't recommend removing audit-argument-checks. Instead I'd do the following (assuming the package really has something of value):
Open an issue on github and let the maintainer know what's up.
Fork the code and add the required checks (see the sketch at the end of this answer). Keep this version as a local package.
Submit a pull request for the changes.
If all goes well, your PR will be accepted and everyone can benefit from the change. In the worst case, you'll still have a local copy that you can use in your app.
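The patch in the fork is usually just a line or two per method. A minimal sketch, assuming the Tags method takes a tag string and a document id (the real signature is whatever the package defines):

Meteor.methods({
  '/patterns/addTag': function (tag, documentId) {
    // Satisfy audit-argument-checks by validating every argument.
    check(tag, String);
    check(documentId, String);

    // ... the package's original logic continues unchanged ...
  }
});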
My company runs a 4GL application internally. It's very old and no one really knows how to improve/develop for it since the developers are long gone.
I need to make a simple SOAP call to my Magento web store. There are tons of examples online in a multitude of languages, but I can't find a single 4GL (OpenEdge ABL) example.
I'm trying to set SKUs to an Out of Stock status.
Does anyone have a simple example that I can look at, or at least a starting point, since there seems to be so little information on 4GL on the web?
Example of the call I need in PHP:
<?php
$proxy = new SoapClient('http://www.domain.com/api/soap/?wsdl');
$sessionId = $proxy->login('admin', 'password');
$proxy->call($sessionId, 'product_stock.update', array('sku123', array('qty'=>50, 'is_in_stock'=>1)));
For version 10.2B there's built-in support for consuming web services in Progress ABL.
This is a basic tutorial of how to create a client for a SOAP-based web service in ABL. It's not best practices or in any way complete. Just a quick guide to get started.
1. Analyse the WSDL
There's a built-in tool, available via the command line, that lets you analyse a WSDL and create documentation about available services, datatypes, syntax etc. Invoke it on your WSDL like this:
proenv> bprowsdldoc yourwsdl-file c:\temp\docs
The WSDL can be local or remote. If it's remote you specify the URL; if it's local you can specify just the complete local path. Documentation in HTML format will end up in c:\temp\docs. Open up index.html in that folder.
2. Create a basic client
In the index.html document there are a number of headings. Click the link under "Port types". In the Port Type document you will find some useful data.
Copy-and-paste the example in "Connection Details" into your Progress Editor. It should look something like this (names of services and procedures will be different - they are defined in the wsdl):
/* Handles for the web service connection and the port type */
DEFINE VARIABLE hWebService AS HANDLE NO-UNDO.
DEFINE VARIABLE hYYY        AS HANDLE NO-UNDO.

/* Connect to the service described by the WSDL */
CREATE SERVER hWebService.
hWebService:CONNECT("-WSDL 'file_or_url_to_wsdl.wsdl'").

/* Bind the port type (XXX and hYYY are placeholders from the generated docs) */
RUN XXX SET hYYY ON hWebService.
If you run this code your client is connected to the web service but it's still not doing anything.
Further down the same document there's a heading called "Operation (internal procedure) details". This is where the actual web service is invoked. It will look something like the code below. It actually shows two ways of making the same call, one functional and one procedural, so choose whichever you prefer and insert it into your editor (I usually use the procedural one for no real reason other than old habit):
DEFINE VARIABLE strXMLRequest AS CHARACTER NO-UNDO.
DEFINE VARIABLE ProcessXMLResult AS CHARACTER NO-UNDO.
FUNCTION ProcessXML RETURNS CHARACTER
(INPUT strXMLRequest AS CHARACTER)
IN hYYY.
/* Function invocation of ProcessXML operation. */
ProcessXMLResult = ProcessXML(strXMLRequest).
/* Procedure invocation of ProcessXML operation. */
RUN ProcessXML IN hYYY (INPUT strXMLRequest, OUTPUT ProcessXMLResult).
Now all you need to end your program is disconnecting and cleaning up. So insert:
hWebService:DISCONNECT().
DELETE OBJECT hWebService.
If you've followed all the steps you should have a skeleton for invoking a web service. The only problem left is that you need to handle the in- and out-data.
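Assembled, the skeleton might look like this (the port type and operation names here are hypothetical; take the real ones from the generated documentation):

DEFINE VARIABLE hWebService AS HANDLE    NO-UNDO.
DEFINE VARIABLE hPortType   AS HANDLE    NO-UNDO.
DEFINE VARIABLE cSessionId  AS CHARACTER NO-UNDO.

CREATE SERVER hWebService.
hWebService:CONNECT("-WSDL 'http://www.domain.com/api/soap/?wsdl'") NO-ERROR.

IF NOT hWebService:CONNECTED() THEN DO:
    MESSAGE "Could not connect to the web service." VIEW-AS ALERT-BOX ERROR.
    RETURN.
END.

/* Placeholder names - check the generated Port Type document */
RUN MagentoPort SET hPortType ON hWebService.
RUN login IN hPortType (INPUT "admin", INPUT "password", OUTPUT cSessionId).

/* ... invoke the product_stock.update operation here with cSessionId ... */

hWebService:DISCONNECT().
DELETE OBJECT hWebService.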
3. Handle the answer and the request
Depending on how the web service is built this can be easy (if you only input and output simple data like strings and numbers) or quite complicated (if you input and output entire XML documents). The documentation you created in step one lists all datatypes (in the index.html document), but it doesn't offer any support in how you create any needed XML documents. There's specific Progress documentation available on how to work with XML...
The better approach is to take a look at the official documentation. There you will find everything above and more - how to handle errors etc.
Here is an overview of all 10.2B documentation and here is the PDF named Web Services.
Here is a link to a complete (but actually not so good) example in the Progress KnowledgeBase where a client and corresponding request/response xml is created and handled.
Look at these chapters:
6 - Creating an ABL Client from WSDL
7 - Connecting to Web Services from ABL
8 - Invoking Web Service Operations from ABL
That will basically take you through the entire process from start to finish.
I'm developing an API Server in Go and the server (at the moment) handles all translations for clients. When an API client fetches particular data it also asks for the translations that are available for the given section.
Ideally I want to have the following folder structure:
/messages
/home.en
/home.fr
/home.sv
/news.en
/news.fr
/news.sv
Where news and home are distinct modules.
Now the question I have for Revel is: is it possible to fetch ALL language strings for a given module and locale? For example, pull all home strings for en-US.
EDIT:
I would like the output (something I can return to the client) to be a key:value map of translation strings.
Any guidance would be appreciated.
It seems to me that Revel uses message-based translation (just like gettext does), so you need the original string to get the translation. These strings are stored in Config objects, which are themselves stored in the messages variable of i18n.go, keyed by language.

As you can see, this mapping is not exported, so you can't access it. The best way to fix this is to write a function for what you want (getting the config by supplying a language), or to export one of the existing functions and create a pull request for Revel.

You may work around this by copying the code of loadMessageFile, or by forking your version of Revel and exporting loadMessageFile or parseMessagesFile. This is also a great opportunity to create a pull request.

Note that the localizations are stored in an INI file format parsed by robfig/config, so parsing the files manually is also an option (although not recommended).
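If you do go the manual-parsing route, a minimal sketch might look like this (the messages path and the robfig/config calls are assumptions based on its goconfig-style API; verify them against the version you vendor):

package main

import (
    "fmt"

    "github.com/robfig/config"
)

// loadModuleMessages reads e.g. messages/home.en and flattens every
// section into one key:value map that can be returned to API clients.
func loadModuleMessages(module, locale string) (map[string]string, error) {
    c, err := config.ReadDefault(fmt.Sprintf("messages/%s.%s", module, locale))
    if err != nil {
        return nil, err
    }

    out := make(map[string]string)
    for _, section := range c.Sections() {
        options, err := c.Options(section)
        if err != nil {
            continue
        }
        for _, key := range options {
            if value, err := c.String(section, key); err == nil {
                out[key] = value
            }
        }
    }
    return out, nil
}

func main() {
    msgs, err := loadModuleMessages("home", "en")
    if err != nil {
        fmt.Println("load failed:", err)
        return
    }
    fmt.Println(msgs) // e.g. map[greeting:Hello title:Home]
}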
I created an ASP.NET MVC4 Web API service (REST) with a single GET action. The action currently needs 11 input values, so rather than passing all of those values in the URL, I opted to encapsulate them in a single class and pass it as the content body. When I test in Fiddler, I specify the verb as GET and enter the JSON text in the "Request Body" input box. This works great!
The problem is when I attempt to perform load testing in Visual Studio 2010 Ultimate. I am able to specify the GET action and the JSON content body just fine. But when I run the load test, VS reports exceptions of type ProtocolViolationException ("Cannot send a content-body with this verb-type") in the test results. The test executes in 1 ms, so I suspect the exceptions are causing the test to abort immediately. What can I do to avoid those exceptions? I'd prefer not to change my API to use URL arguments just to work around the test tooling. If I should change the API for other reasons, let me know. Thanks!
I found it easier to put this answer rather than carry on the discussions.
Sending content with GET is not defined in RFC 2616, yet it is not prohibited either. So as far as the spec is concerned, we are in territory where we have to make our own judgement.
GET is canonically used to get a resource. So you are retrieving this resource using this verb with the parameters you are sending. Since GET is both safe and idempotent, it is ideal for caching. Caching usually takes place based on the resource URI, and sometimes based on various headers. The point is that cache implementations (AFAIK) would not use the GET content, and to be honest I have not seen any GET with content in the real world. It would also not make sense to include the content in the cache key generation, since it would reduce the scalability of the caches.
If you have parameters to send, they should be in the URI, since they are part of what identifies the resource. As such, I strongly believe sending content with GET is wrong.
Even when you look at implementations such as OData, they put the criteria in the URI. I cannot imagine your (or any) application's requirements being beyond OData's query requirements.
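In practice that means binding your 11 values from the query string. A minimal Web API sketch (the class, controller, and property names are hypothetical stand-ins for your real ones):

using System.Net;
using System.Net.Http;
using System.Web.Http;

// Encapsulates the 11 input values; two are shown as examples.
public class SearchCriteria
{
    public string Region { get; set; }
    public int PageSize { get; set; }
    // ... the remaining nine values ...
}

public class ReportsController : ApiController
{
    // GET api/reports?region=eu&pageSize=50&...
    public HttpResponseMessage Get([FromUri] SearchCriteria criteria)
    {
        // ... look the resource up using the criteria ...
        return Request.CreateResponse(HttpStatusCode.OK, criteria);
    }
}

The load test tool can then drive the action with a plain query string and no content body, which also sidesteps the ProtocolViolationException.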