Anyone using TortoiseSVN with SSL and a relative path for the bugtraq:url?

(NOTE: All fake URLs below are shown as https:/ or http:/, with just a single slash.)
I've spent most of the day trying to get TortoiseSVN to work with my bug tracking system using a relative URL for the bugtraq:url parameter. I'm using SSL on the repository with a self-issued certificate. I'm using VisualSVN for the server and client pieces and Versioned.com's "Artifacts" for the bug tracking piece.
Seems like the only way I can make the embedded hyperlink (e.g. %BUGID%) work to fire up the browser is to use an absolute path for bugtraq:url, like this:
https:/my-server/svn/ProjectName/trunk/Bugs/artifacts.html#%BUGID%
I've tried every variation I can think of for relative paths (using "^/bla") without any luck. I get a link showing up in the commit and log dialogs, but clicking it does nothing. Here's what the TortoiseSVN docs say:
==
You can also use relative URLs instead of absolute ones. This is useful when your issue tracker is on the same domain/server as your source repository. In case the domain name ever changes, you don't have to adjust the bugtraq:url property. There are two ways to specify a relative URL:
If it begins with the string ^/ it is assumed to be relative to the repository root. For example, ^/../?do=details&id=%BUGID% will resolve to http:/tortoisesvn.net/?do=details&id=%BUGID% if your repository is located on http:/tortoisesvn.net/svn/trunk/.
A URL beginning with the string / is assumed to be relative to the server's hostname. For example /?do=details&id=%BUGID% will resolve to http:/tortoisesvn.net/?do=details&id=%BUGID% if your repository is located anywhere on http:/tortoisesvn.net.
==
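For reference, here's how I'm setting the property from the command line (a sketch against my own layout; the trunk/Bugs path is from my repository and may differ in yours):

svn propset bugtraq:url "^/trunk/Bugs/artifacts.html#%BUGID%" path/to/working-copy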
I'm beginning to think that this is a limitation of either: A) using HTTPS on my VisualSVN Server; or B) trying to access a project on the same box as the repository but that's stored separate from the repository.

Related

MAMP/WAMP - What is your process for alternating between your online websites (URLs, https, ssl) and local development (URLs, https, ssl)?

What is the recommended way to alternate between online and local development, given that you want to modify your websites locally?
Do you systematically change all URLs in your project code (by search/replace) to fit the local URL scheme, and sometimes create a personal SSL certificate for https? Or do you use another solution, like localhost aliases, rewrite rules, or online development tools?
What could be an automatic solution to avoid these tedious modifications? Search/replace looks quite primitive and time-consuming, especially since I develop during the few hours left after my main work.
What workflows make this kind of alternation easier?
Have a nice day,
For all the beginners, here's the thing.
I've created a config.php file which contains constants: one config file for the local project folder and one for the online server folder.
Inside this config file, I've created a constant (constants are then available everywhere in the project) to define the main URL of the project, e.g.:
define('CST_MAIN_URL', 'http://www.myproject.com/'); // for the online config.php file
define('CST_MAIN_URL', 'http://localhost:8888/'); // for the local config.php file
Thus, each header or redirection can work with that constant, like:
header('Location: ' . CST_MAIN_URL . 'index.php');
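If you'd rather not maintain two copies of config.php at all, a minimal sketch (my own variation, with placeholder host names) that picks the URL from the host the script is running on:

<?php
// config.php - choose the base URL automatically per environment
// (host names below are examples; adjust to your MAMP/WAMP port and your domain)
if (in_array($_SERVER['HTTP_HOST'], array('localhost:8888', '127.0.0.1:8888'))) {
    define('CST_MAIN_URL', 'http://localhost:8888/');      // local development
} else {
    define('CST_MAIN_URL', 'http://www.myproject.com/');   // online server
}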
Beyond that, you may need RewriteEngine rules in your .htaccess file, for instance when a question mark or a slash behaves differently under MAMP/WAMP and you must adjust its behavior. Unfortunately, you need at least a basic understanding of regular expressions to master those URL rewrites.
Hope it helps.

How to enable remote webpage access to nw.js's Node environment

I was working on a project which needs to figure out whether a webpage is running in nw.js and then do some operations based on that.
The page is running on a remote server, and I use window.location to jump to it from my webapp. When jumping to an outside location, the webpage cannot get the nw.js Node environment even though it is opened in nw.js.
I use node-remote in the original webapp's package.json to allow access from all URLs; it looks like this:
"single-instance": true,
"node-remote": ["*://*"],
"chromium-args": "--args --touch-events=enabled --disable-web-security --allow-file-access-from-files"
Do I miss something?
The special pattern <all_urls> matches any URL that starts with a permitted scheme.
So since you want to match any website, it is best to do this:
"node-remote": ["<all_urls>"]
For Chromium arguments that are valid for nw.js, you can try this site:

How to name API routing on a webhost subfolder

I always thought API controllers were not found by physical paths. The reason I ask is that I have a website, example.com. I created a folder example.com/testing and uploaded my project there. When I ran it, I got errors saying that none of the ApiControllers could be found. So I changed /api/apiCustomers to /testing/api/apiCustomers. It then worked, though not the actual posting of any new records; it did locate and retrieve all the records from the database. But it doesn't seem like that is what I should actually need to do. I have a domain with WinHost, and the default publish folder is example.com/myApp.
Am I looking at this the wrong way?
To handle requests where you do not know the root path, you can use the ~ character (as in ASP.NET) like this:
~/api/apiCustomers
The ~ will then be replaced by the application root (i.e. /api/apiCustomers in production and /testing/api/apiCustomers in your test environment).
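A sketch of how that looks in practice (assuming ASP.NET MVC/Web API with a Razor view and jQuery; the ~ is resolved server-side by Url.Content, so the client gets the right prefix in both environments):

<script>
    // Url.Content expands "~" to the site root: "/" in production,
    // "/testing/" when the app is published to the subfolder
    var apiUrl = '@Url.Content("~/api/apiCustomers")';
    $.getJSON(apiUrl, function (data) {
        console.log(data); // customers retrieved relative to the app root
    });
</script>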

Download build drop from hosted Team Foundation Service

Using the hosted Team Foundation Service at tfs.visualstudio.com, one has the option in a Build Definition to "Copy build output to the server" which creates a zip of the drop folder that can be downloaded over https via team web access. I really need to download this drop automatically, so I can chain input to the next stage in my build pipeline.
Unfortunately, the drop URL is not obvious, but can be created using the TfsDropDownloader.
TL;DR - I can't get the TfsDropDownloader to work. I'm hoping someone else has used this tool or a similar method to successfully download a drop from https://tfs.visualstudio.com.
Using the command line TfsDropDownloader.exe I can do this:
TfsDropDownloader.exe /c:"https://MYPROJECTNAME.visualstudio.com/DefaultCollection" /t:"ProjectName" /b:"BuildDefinitionName" /u:username /p:password
...and get an empty zip file with the correct build label name of the last successful build e.g. BuildDefinitionName_20130611.1.zip
Running the source code in the debugger, this is because the URL that is generated for downloading:
https://tflonline.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop/BuildDefinitionName_20130611.1.zip
...returns a content type of application/json, which is unsupported. This exception is swallowed by the application, but not before the empty zip file is created.
Is it possible the REST API on Team Foundation Service has changed in some way so the generated URL is no longer correct?
Note that I am using the "alternate credentials" defined on my Team Foundation Service account (i.e. not my Live ID); using anything else gets me TF30063: not authorized.
I got it working by using alternate credentials, but I also had to access the REST API via a different path.
The current TfsDropDownloader builds a URL that looks like this:
https://project.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop/BuildDefinitionName_20130611.1.zip
This returns empty JSON whenever I try to use it. I'm definitely authenticated, because if I tweak the URL to:
https://project.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop
I get a nice JSON listing of every single file in the drop, but no zip.
From spying on the SSL traffic to https://tfs.visualstudio.com with Fiddler, I saw that clicking the "Download drop as zip" link hits another endpoint at:
https://project.visualstudio.com/DefaultCollection/ProjectName/_api/_build/ItemContent?buildUri=vstfs%3a%2f%2f%2fBuild%2fBuild%2f639&path=%2Fdrop
...which does give you a zip. The "vstfs%3a%2f%2f%2fBuild%2fBuild%2f639" portion is the URL encoded BuildUri.
So I've changed my version of GetServerPath in the TfsDropDownloader source to do this:
private static string GetServerPath(TfsConnection collection, IBuildDetail buildDetail)
{
    // Build the _api/_build/ItemContent URL that the web UI's
    // "Download drop as zip" link uses, rather than the
    // _apis/resources/containers URL the stock downloader builds.
    var downloadPath = string.Format("{0}{1}/_api/_build/ItemContent?buildUri={2}&path=%2Fdrop",
        collection.Uri,
        HttpUtility.UrlPathEncode(buildDetail.TeamProject),
        HttpUtility.UrlEncode(buildDetail.Uri.ToString()));

    return downloadPath;
}
This works for me for the time being. Hopefully this helps someone else with the same problem!

Launching a registered mime helper application

I used to be able to launch a locally installed helper application by registering a given MIME type in the Windows registry. This allowed users to click once on a link to the current install of our internal browser application. It worked fine in Internet Explorer 5 (most of the time) and Firefox, but now does not work in Internet Explorer 7.
The filename passed to my shell/open/command is not the full physical path to the downloaded install package. The path parameter I am handed by IE is
"C:\Document and Settings\chq-tomc\Local Settings\Temporary Internet Files\
EIPortal_DEV_2_0_5_4[1].expd"
This unfortunately does not resolve to the physical file when calling FileExists() or when attempting to create a TFileStream object.
The physical path is missing the Internet Explorer hidden caching sub-directory for Temporary Internet Files of "Content.IE5\ALBKHO3Q" whose absolute path would be expressed as
"C:\Document and Settings\chq-tomc\Local Settings\Temporary Internet Files\
Content.IE5\ALBKHO3Q\EIPortal_DEV_2_0_5_4[1].expd"
Yes, the sub-directories are randomly generated by IE and that should not be a concern so long as IE passes the full path to my helper application, which it unfortunately is not doing.
Installation of the mime helper application is not a concern. It is installed/updated by a global login script for all 10,000+ users worldwide. The mime helper is only invoked when the user clicks on an internal web page with a link to an installation of our Desktop browser application. That install is served back with a mime-type of "application/x-expeditors". The registration of the ".expd" / "application/x-expeditors" mime-type looks like this.
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\.expd]
@="ExpeditorsInstaller"
"Content Type"="application/x-expeditors"
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller]
"EditFlags"=hex:00,00,01,00
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller\shell]
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller\shell\open]
@=""
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller\shell\open\command]
@="\"C:\\projects\\desktop2\\WebInstaller\\WebInstaller.exe\" \"%1\""
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\MIME\Database\Content Type\application/x-expeditors]
"Extension"=".expd"
I had considered enumerating all of a user's IE cache entries, but I would be concerned with how long it might take to examine them all, or that I might find an older cache entry before the current one I am looking for. However, the bracketed filename suffix "[n]" may be the unique key.
I have tried wininet method GetUrlCacheEntryInfo but that requires the URL, not the virtual path handed over by IE.
My hope is that there is a Shell function that given a virtual path will hand back the physical path.
I believe the sub-directories created by IE are randomly generated, so you won't be able to guarantee that it will be named the same every time. The problem I see with the registry method is that it only works while the file is still in the cache; emptying the cache would purge the file, requiring yet another installation.
Would it not be better to install this helper into application data?
I'm not sure about this, but perhaps this may lead you in the right direction: try the URL cache functions from the wininet DLL (FindFirstUrlCacheEntry, FindNextUrlCacheEntry and FindCloseUrlCache for enumeration), and when you locate an entry whose local file name matches the given path, you may be able to use RetrieveUrlCacheEntryFile to retrieve the file.
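A rough Delphi sketch of that enumeration (untested; the fixed-size buffer and matching on the bracketed file name are my assumptions, and ERROR_INSUFFICIENT_BUFFER retry handling is omitted):

uses Windows, SysUtils, WinInet;

// Walk the IE cache looking for an entry whose local file name matches
// the name IE handed us (e.g. EIPortal_DEV_2_0_5_4[1].expd), and return
// its physical path under Content.IE5\...
function FindCachedFile(const VirtualPath: string): string;
var
  Buf: array[0..4095] of Byte;   // oversized buffer for the variable-length record
  Info: PInternetCacheEntryInfo;
  Size: DWORD;
  hFind: THandle;
begin
  Result := '';
  Info := PInternetCacheEntryInfo(@Buf);
  Size := SizeOf(Buf);
  hFind := FindFirstUrlCacheEntry(nil, Info^, Size);
  if hFind = 0 then Exit;
  try
    repeat
      if (Info^.lpszLocalFileName <> nil) and
         SameText(ExtractFileName(Info^.lpszLocalFileName),
                  ExtractFileName(VirtualPath)) then
      begin
        Result := Info^.lpszLocalFileName;  // the physical Content.IE5 path
        Exit;
      end;
      Size := SizeOf(Buf);
    until not FindNextUrlCacheEntry(hFind, Info^, Size);
  finally
    FindCloseUrlCache(hFind);
  end;
end;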
I am using a similar system with the X-Appl browser to display WAML web applications and it works perfectly. Maybe you should have a look at how they managed to do it.
It looks like iexplore is passing the shell namespace "name" of the file rather than the filesystem name.
I don't think there is a documented way to be passed a shell item id on the command line. Explorer does it to itself, but there are marshalling considerations, as shell item ids are (pointers to) binary data structures that are only valid in a single process.
What I might try doing is (see the sketch after this list):
1. Call SHGetDesktopFolder, which will return the root IShellFolder object of the shell namespace.
2. Call IShellFolder::ParseDisplayName to turn the name you are given back into a shell item id list.
3. Try IShellFolder::GetDisplayNameOf with the SHGDN_FORPARSING flag - which, frankly, feels like we've just gone in a complete circle and are back where we started, because I think it's this API that's ultimately responsible for returning the "wrong" filesystem-relative path.
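A rough Delphi sketch of those three steps (untested; I've used SHGetPathFromIDList for the last step, which wraps GetDisplayNameOf with SHGDN_FORPARSING for filesystem items):

uses Windows, ComObj, ActiveX, ShlObj;

// Parse the name IE handed us into a PIDL, then ask the shell
// for the corresponding filesystem path.
function ResolveShellName(const GivenName: WideString): string;
var
  Desktop: IShellFolder;
  Eaten, Attrs: ULONG;
  Pidl: PItemIDList;
  Path: array[0..MAX_PATH - 1] of Char;
begin
  Result := '';
  OleCheck(SHGetDesktopFolder(Desktop));      // step 1: root of the shell namespace
  Attrs := 0;
  OleCheck(Desktop.ParseDisplayName(0, nil,   // step 2: display name -> item id list
    PWideChar(GivenName), Eaten, Pidl, Attrs));
  try
    if SHGetPathFromIDList(Pidl, Path) then   // step 3: item id list -> physical path
      Result := Path;
  finally
    CoTaskMemFree(Pidl);                      // PIDLs from ParseDisplayName are caller-freed
  end;
end;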
Some follow-up to close out this question.
Turned out the real issue was how I was creating the file handle using TFileStream. I changed to open with fmOpenRead or fmShareDenyWrite which solved what turned out to be a file locking issue.
srcFile := TFileStream.Create(physicalFilename, fmOpenRead or fmShareDenyWrite);
