Support for many subdomains in manifest - Outlook

Is there a way to use dynamic subdomains for the <AppDomain> property in the manifest file, or something like wildcards? For example: <AppDomain>https://*.somedomain.com</AppDomain>. We have a large number of subdomains, one per customer.
Is there a good way to handle this?

Based on this document, it's impossible.
Instead of maintaining the list by hand, you can generate the manifest file programmatically from the full list of subdomains, e.g. create an endpoint that returns the manifest XML with the complete subdomain list.
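A minimal sketch of what such an endpoint could look like, assuming a Node/Express app (the route, subdomain list, and manifest skeleton below are placeholders, and a real manifest needs all of its other required elements):

```typescript
// Hypothetical sketch: serve a manifest whose <AppDomains> section is
// generated from the current customer subdomain list.
import express from "express";

const app = express();

// In practice this list would come from a database or config store.
const customerSubdomains = ["customer-a", "customer-b", "customer-c"];

app.get("/manifest.xml", (_req, res) => {
  const appDomains = customerSubdomains
    .map((s) => `    <AppDomain>https://${s}.somedomain.com</AppDomain>`)
    .join("\n");

  const manifest = `<?xml version="1.0" encoding="UTF-8"?>
<OfficeApp xmlns="http://schemas.microsoft.com/office/appforoffice/1.1">
  <!-- ...all other required manifest elements omitted... -->
  <AppDomains>
${appDomains}
  </AppDomains>
</OfficeApp>`;

  res.type("application/xml").send(manifest);
});

app.listen(3000);
```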

Currently, this is not supported in the manifest (see the link in Stefan's answer). This is a feature we know is desired, but we do not have a timeline for if or when it will ship.
Suggestions/Voting for this feature can be done at: https://officespdev.uservoice.com/forums/224641-feature-requests-and-feedback/suggestions/13314972-support-wildcards-for-appdomains-in-office-add-in
However, subdomains of the SourceLocation URL in the manifest are currently supported. For example, if the SourceLocation is contoso.com, then customerA.contoso.com would be allowed.

How to best manage versions with API Blueprint format

How are people managing multiple API versions with API Blueprint?
It doesn't seem that the format supports version sections within a single file, so I'm left thinking that multiple files with indicators in the filename are the best option.
We want to leverage the tools to create a central mock-server and doc commons, and will need to handle evolving multiple versions of each API.
Managing multiple versions via branches seems inconvenient for us, so we render the entire document, with multiple versions of the APIs, on one page. Our users need to be able to read both versions by just prepending v1 or v2 to the URL, so we have a Node app that handles the documentation requests and renders the docs via the aglio node module.
The following is how we organize the docs.
Users can request /docs/en/spec.
The en part determines the language of the document as we support different languages.
Because the entire document is pretty huge, we split it into files based on the Blueprint Group (the thing that starts with # Group GroupName).
When a request comes in, we first look if we have previously compiled the doc and have a cached version, so we don't recompile every time (it's pretty intensive work especially when the doc is large).
If we have no cached version, we read all *.md files in the docs/en directory.
We sort the filenames alphabetically, concatenate their contents, and pass the result to aglio, which produces nice HTML output. We cache this output to a file and later pipe it to the client on each request.
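A rough sketch of that read-sort-concatenate-render-cache flow, assuming aglio's render(blueprint, options, callback) API (paths and options here are made up):

```typescript
// Rough sketch of the flow described above; paths and options are placeholders.
import * as fs from "fs";
import * as path from "path";
const aglio = require("aglio");

const DOCS_DIR = path.join(__dirname, "docs", "en");
const CACHE_FILE = path.join(__dirname, "cache", "en.html");

function renderDocs(callback: (err: Error | null, html?: string) => void) {
  // Serve the previously compiled version if we have it cached.
  if (fs.existsSync(CACHE_FILE)) {
    return callback(null, fs.readFileSync(CACHE_FILE, "utf8"));
  }

  // Read every *.md group file, sort by filename, and concatenate.
  const blueprint = fs
    .readdirSync(DOCS_DIR)
    .filter((f) => f.endsWith(".md"))
    .sort()
    .map((f) => fs.readFileSync(path.join(DOCS_DIR, f), "utf8"))
    .join("\n\n");

  // Compile once, cache the HTML to disk, and hand it back.
  aglio.render(blueprint, { themeVariables: "default" }, (err: Error, html: string) => {
    if (err) return callback(err);
    fs.writeFileSync(CACHE_FILE, html);
    callback(null, html);
  });
}
```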
The UI provides the table of contents (side menu on the left) which has, for instance, the following format.
Auth
Projects
Project Users
...
Groups
Groups v2
Now each Group of APIs has a distinct URL which is prefixed with /v1 by default. When we introduce a new version of a specific API, we create a new # Group Groups v2 which is prefixed with /v2. This way our users can see and choose between multiple versions of the APIs on one page.
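For illustration, the two versions might sit side by side in the blueprint roughly like this (the group, resource, and URI names are made up):

```
# Group Groups
## Groups Collection [/v1/groups]
### List Groups [GET]
+ Response 200 (application/json)

# Group Groups v2
## Groups Collection [/v2/groups]
### List Groups [GET]
+ Response 200 (application/json)
```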
The nice thing about the aglio node module is that it provides multiple themes for the UI, with nice navigation, so our users don't feel the page is overloaded. Among the themes you can choose either a single-page UI or a multi-page UI, where selecting an API loads the page with the corresponding API and changes the URL.
Implementing this logic is very simple. Hope this helps.
There is another idea we are considering right now but haven't started on just yet: avoid splitting APIs into different # Groups and instead modify the Jade template used by aglio so that it supports multiple versions out of the box.
It might be best to version the blueprint file in a versioning repository and treat different branches as different API versions. You can even have the blueprint in the same repo/branch as the API server implementation.
If you're versioning using GitHub, Apiary can connect to GitHub, and you can set up different branches to be picked up by different documentations in Apiary.

FHIR publish tool

I use the FHIR publish tool to create profile documentation. I organize my extensions in one profile, and extended resources (resources which use extensions) in separate application profiles. Extensions in a resource then have a value like 'profileIdOfExtension#extensionName' in the Profile column. But this does not make the extensions hyperlink to the place where they are defined (the profile containing all extensions). To make the hyperlink work, I have to put 'profileOfExtensionHtmlFileName#extensionName' in the Profile column. Why doesn't the tool resolve the profileId to the profileHtmlFileName? In the profile XML definition, I think I should refer to the profileId and not the HTML file name.
The way the tool handles profiles was thrown together to prove something, and was never intended to be more. I'm going to rewrite that part in the next couple of months, and all of that will get overhauled completely. I'll announce on my blog (http://healthintersections.com.au) when it's done.

Populate file list with previously uploaded files

I'm using the jQuery-wrapped version of Fine Uploader v3.3.
Is it possible to populate the file list with files already in the upload folder?
I think "_addToList(id, name)" should do the trick, but I can't get it to work. Any ideas?
Seems that they are currently working on this feature:
https://github.com/Widen/fine-uploader/issues/784
So, this will be available soon.
This is not a behavior that Fine Uploader currently supports. Fine Uploader only displays files that users have submitted to the uploader since the current uploader instance was created. It doesn't try to be an all-in-one web application. You could probably add your own item to the list/UI via javascript. That probably wouldn't be terribly difficult, but seems like an odd thing to do.
If you'd like to discuss your specific use case more, please open up a feature request in the Github issue tracker.
Generally, client-side code cannot add stored or hard-coded, path-based file names to any type of POST or upload operation. This is obviously a security measure: imagine if a malicious web page could bake an arbitrary file name into a generic POST operation. So, from what I understand, only the user can specify path-based file names, via a file browser, for the session they are included in. This applies to HTML/JavaScript/jQuery, but I am unsure whether Flash or Silverlight based solutions would also be limited. I think a Java-based uploader would be free of this restriction, but then you are just moving closer and closer to installed software.

How to simply rebuild all aliases (friendly URLs) to make them based on pagetitles? (MODX Revolution)

I have created a lot of documents and sub-documents in the document tree, but on the frontend their friendly URLs are based on IDs. How can I simply update those aliases and rewrite them so they are based on the pagetitle? I was trying to put strings generated from the pagetitle into the
`alias`, `uri`
columns, but there were errors while flushing the MODX cache. Do you know an add-on or SQL query that simply does the job?
There is a switch in the System Settings (I believe it is located somewhere in the FURL tab). This is an almost mandatory setting in MODX.

Website URLs without file extensions?

When I look at Amazon.com, I see that their page URLs do not have .htm, .html, or .php at the end.
It is like:
http://www.amazon.com/books-used-books-textbooks/b/ref=topnav_storetab_b?ie=UTF8&node=283155
Why and how? What kind of extension is that?
Your browser doesn't care about the extension of the file, only the content type that the server reports. (Well, unless you use IE because at Microsoft they think they know more about what you're serving up than you do). If your server reports that the content being served up is Content-Type: text/html, then your browser is supposed to treat it like it's HTML no matter what the file name is.
Typically, it's implemented using a URL rewriting scheme of some description. The basic notion is that the web should be moving to addressing resources with proper URIs, not classic old URLs which leak implementation detail, and which are vulnerable to future changes as a result.
A thorough discussion of the topic can be found in Tim Berners-Lee's article Cool URIs Don't Change, which argues in favour of reducing the irrelevant cruft in URIs as a means of helping to avoid the problems that occur when implementations do change, and when resources do move to a different URL. The article itself contains good general advice on planning out a URI scheme, and is well worth a read.
More specifically than most of these answers:
Web browsers don't use the file extension to determine what kind of file is being served (unless you're Internet Explorer). Instead, they use the Content-Type HTTP header, which is sent down the wire before the content of the image, HTML page, download, or whatever. For example:
Content-type: text/html
denotes that the page you are viewing should be interpreted as HTML, and
Content-type: image/png
denotes that the page is a PNG image.
Web servers often use the file extension to decide which Content-Type to assign when a file is served directly from disk, but web applications can generate responses with any Content-Type they like. As long as the actual content of the page matches the declared Content-Type, the data renders as intended, no matter the filename's structure or extension.
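As a tiny sketch of that point, here is a Node/TypeScript server (the route and markup are made up) that answers an extensionless URL as HTML purely because of the header it sends:

```typescript
// An extensionless URL served as HTML purely because of the Content-Type header.
import * as http from "http";

http
  .createServer((req, res) => {
    if (req.url === "/books-used-books-textbooks") {
      // No file on disk and no extension; the header alone tells the
      // browser to render the response as HTML.
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end("<html><body><h1>Books</h1></body></html>");
    } else {
      res.writeHead(404, { "Content-Type": "text/plain" });
      res.end("Not found");
    }
  })
  .listen(8080);
```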
Websites that use Apache are probably using mod_rewrite, which lets them rewrite URLs (and make them more user- and SEO-friendly).
You can read more here http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html
and here http://www.sitepoint.com/article/apache-mod_rewrite-examples/
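A minimal, hypothetical .htaccess sketch of the idea (the rule and paths are made up), mapping a clean URL onto the script that actually handles it:

```
# Expose /books/123 while the request is actually handled by product.php?id=123.
RewriteEngine On
RewriteRule ^books/([0-9]+)/?$ product.php?id=$1 [L,QSA]
```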
EDIT: There are rewriting modules for IIS as well.
Traditionally the file extension represents the file that is being served.
For example
http://someserver/somepath/image.jpg
Later, that same approach was used to let a script process parameters:
http://somerverser/somepath/script.php?param=1234&other=7890
In this case the file is a PHP script that processes the "request" and returns a dynamically created page.
Nowadays, applications are much more complex than that (like the Amazon site you mentioned).
There is no longer a single script that handles the request (but a much more complex app with several files/methods/functions/objects, etc.), and the URL is more like the entry point for a web application (it may have a script behind it, but that's another thing). So web apps like Amazon, and yes Stack Overflow, don't show a file in the URL; anything coming in is processed by the app on the server side.
Take the URL of this very question, stackoverflow.com/questions/322747/websites-urls-without-file-extension: here /questions represents the web app and 322747 the parameter.
I hope this little explanation helps you better understand all the other answers.
Well, how about having an index.html file in the directory and then typing the path into the browser? I see that my Firefox and IE7 both put the trailing slash in automatically; I don't have to type it. This is more suited to people like me who do not think every single URL on earth should invoke PHP, Perl, CGI, and 10,000 other applications just to send a few kilobytes of data.
A lot of people are using a more "RESTful" type of architecture... or at least, REST-looking URLs.
This site (Stack Overflow) doesn't show a file extension... it's using ASP.NET MVC.
Depending on your server's settings, you can use (or not use) any extension you want. You could even set the extension to ".JamesRocks", but it won't be very helpful :)
Anyway, in case you're new to web programming, all that gibberish at the end consists of arguments to a GET request, not the page's extension.
A number of posts have mentioned this, and I'll weigh in. It absolutely is a URL rewriting system, and a number of platforms have ways to implement this.
I've worked for a few larger ecommerce sites, and it is now a very important part of the web presence, and offers a number of advantages.
I would recommend taking the technology you want to work with and researching samples of the URL rewriting mechanism for that platform. For .NET, for example, google 'asp.net url rewriting', or use an add-on framework like MVC, which provides this functionality out of the box.
In Django (a web application framework for Python), you design the URLs yourself, independent of any file name, or even any path on the server for that matter.
You just say something like "I want /news/<number>/ URLs to be handled by this function".
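Not Django itself, but the same pattern sketched in a Node/TypeScript app using Express (the route and handler are made up): a URL pattern mapped straight to a handler function, with no file or extension involved.

```typescript
// Analogous pattern to Django's URL configuration, shown with Express:
// a URL pattern mapped directly to a handler function.
import express from "express";

const app = express();

// "I want /news/<number>/ URLs to be handled by this function."
app.get("/news/:id", (req, res) => {
  res.send(`News article ${req.params.id}`);
});

app.listen(3000);
```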

Resources