We have just upgraded from Microsoft Dynamics CRM 4 to Microsoft Dynamics CRM 2011. Most of the upgrade has gone smoothly; however, I have some custom code (written for CRM 4) which uses the CrmDiscoveryService at the URL "https:///MSCRMServices/2007/SPLA/CrmDiscoveryService.asmx", which worked fine on our Dynamics CRM 4 server but does not work with our Dynamics CRM 2011 server.
Our Dynamics CRM 2011 server is set up on-premises as an IFD deployment. On the actual Dynamics CRM 2011 server box I can navigate to "https://:444/MSCRMServices/2007/SPLA/CrmDiscoveryService.asmx" and I am directed to the correct web service; however, if I try to access this from any other computer I get an infinite redirect loop.
Using Fiddler I can see what is being sent when I try to navigate to the CrmDiscoveryService URL; the response, before I am redirected, is:
HTTP/1.1 302 Found
Cache-Control: private
Content-Length: 237
Content-Type: text/html; charset=utf-8
Location: https://<server>:444/MSCRMServices/2007/SPLA/CrmDiscoveryService.asmx
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Tue, 06 Dec 2011 23:31:26 GMT
<html><head><title>Object moved</title></head><body>
<h2>Object moved to here.</h2>
</body></html>
I believe that Dynamics CRM is redirecting me to the very page I requested, and thus it loops forever.
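A quick way to confirm that is to request the URL without following redirects and compare the Location header with the URL that was requested. A minimal sketch, assuming the Python requests library and using <server> as a placeholder for the real IFD host name:

import requests

# <server> is a placeholder for the real IFD host name.
url = "https://<server>:444/MSCRMServices/2007/SPLA/CrmDiscoveryService.asmx"

resp = requests.get(url, allow_redirects=False)
print(resp.status_code)              # 302 in the failing case
print(resp.headers.get("Location"))  # if this equals url, the server is
                                     # redirecting the request back to itself,
                                     # hence the infinite loop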
I originally had this issue with the Discovery Service: http://social.microsoft.com/Forums/en-US/crmdeployment/thread/d92924d8-5982-4a11-ac66-602feb4542c8/?prof=required however I was able to correct it by allowing anonymous authentication on the folder the Discovery Service is located in.
After some extensive searching I have yet to find anything on the Discovery Service infinite redirect issue I am now having.
Any help would be greatly appreciated.
So I've solved the problem, kind of...
I'll post it here so that anybody else experiencing the same thing will be able to figure it out (there's nothing worse than seeing an empty thread for a problem that one is having).
It turns out that while I cannot access this URL via Internet Explorer, it works correctly from custom code when authenticating via IFD.
I'd still be interested though in finding out why it works in my custom code but not Internet Explorer.
I'm using the sabre/dav library for my project and I'm having some difficulty preventing the default Windows WebDAV client from "deleting" a file that shouldn't be deleted.
The server-side implementation is fine: Forbidden statuses are thrown and are apparently acknowledged by other clients (Finder, Cyberduck), which abort the deletion and show a forbidden error to the user. The same statuses are returned to the Windows client, but it seems to simply "delete" the files, which are still available on the server side; refreshing the folder makes those files visible again. The Windows client appears to just ignore the Forbidden responses and virtually deletes the folder so that it is no longer visible.
The bigger problem is that if for some reason you decide to delete a folder containing both protected and unprotected files/folders, it deletes all of the unprotected ones, because it fails to acknowledge the first (or any) Forbidden response. Other WebDAV clients detect this and stop the deletion process, so the designated folder and its child folders/files are left untouched.
Example of a forbidden response, when trying to delete a folder or file:
HTTP/1.1 403 Forbidden
Server: nginx/1.8.0
Date: Wed, 29 Jul 2015 13:55:11 GMT
Content-Type: application/xml; charset=utf-8
Connection: keep-alive
X-Frame-Options: SAMEORIGIN
X-Powered-By: PHP/5.4.41-1~dotdeb+7.1
X-Sabre-Version: 2.1.3
Vary: Accept-Encoding,User-Agent
Content-Length: 320
<?xml version="1.0" encoding="utf-8"?>
<d:error xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns">
<s:sabredav-version>2.1.3</s:sabredav-version>
<s:exception>Sabre\DAV\Exception\Forbidden</s:exception>
<s:message>Permission denied to delete node</s:message>
</d:error>
Tested with the default Windows client on Windows 8.1 x86.
Any idea how to force the Windows WebDAV client to detect Forbidden responses and terminate the deletion process?
Thanks.
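For comparison, this is roughly what a well-behaved client does, and what the Windows client apparently skips: issue the DELETE, inspect the status, and abort on 403. A minimal Python sketch with the requests library; the URL and credentials are hypothetical:

import requests

# Hypothetical WebDAV URL and credentials.
url = "https://dav.example.com/protected/report.pdf"

resp = requests.request("DELETE", url, auth=("user", "secret"))

if resp.status_code == 403:
    # Finder and Cyberduck stop here and surface the error to the user;
    # the Windows client hides the item locally instead.
    print("Delete forbidden by server:", resp.text)
else:
    resp.raise_for_status()
    print("Deleted")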
As mentioned by Evert, I think you could bypass the problem with properties: https://learn.microsoft.com/en-us/openspecs/sharepoint_protocols/ms-wdvme/f83d826b-7fad-4f80-838c-5c7cc98cb59f
This is described here: How to make file READ ONLY when exposed through WebDAV
Edit: It works in the sense that you can set the read-only flag on the file, but this does not prevent Windows Explorer from sending the DELETE request and ignoring the 403 status code. Windows Explorer just seems to be a bad WebDAV client...
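For reference, setting that flag from a client amounts to a PROPPATCH on the Win32FileAttributes property described in the MS-WDVME document linked above (namespace urn:schemas-microsoft-com:, value 00000001 for the read-only attribute). A Python sketch with a hypothetical URL:

import requests

# Hypothetical WebDAV URL; 00000001 is the read-only attribute bit.
url = "https://dav.example.com/protected/report.pdf"

body = """<?xml version="1.0" encoding="utf-8"?>
<d:propertyupdate xmlns:d="DAV:" xmlns:z="urn:schemas-microsoft-com:">
  <d:set>
    <d:prop>
      <z:Win32FileAttributes>00000001</z:Win32FileAttributes>
    </d:prop>
  </d:set>
</d:propertyupdate>"""

resp = requests.request("PROPPATCH", url, data=body,
                        headers={"Content-Type": "application/xml"},
                        auth=("user", "secret"))
print(resp.status_code)  # 207 Multi-Status on success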
I am an Oracle developer. We have a requirement to read the contents of FileNet documents via the Oracle database, but OpenSSO is enabled in our environment, which is currently blocking us from reading the data from the FileNet server.
When we checked with the OpenSSO team, they confirmed that when the FileNet URL is invoked via a browser, a session cookie is generated in OpenSSO which plays a significant role in authentication.
But when we tried to invoke the same FileNet URL via Oracle PL/SQL (i.e. from the database), we could see the cookie details shown below. However, we don't understand why OpenSSO still does not report a successful authentication.
We googled the error code X-AuthErrorCode: -1, which turned up: "In the response you're going to receive the "X-AuthErrorCode" header; if its value is 0, then the login was successful. Also, you need to check the iPlanetDirectoryPro cookie for the admin session id."
With all this information, can someone please help us find the root cause of the authentication failure in OpenSSO?
HTTP response status code: 200
HTTP response reason phrase: OK
X-Powered-By: JSP/2.1
Server: Sun GlassFish Enterprise Server v2.1
Cache-Control: private
Pragma: no-cache
Expires: 0
X-DSAMEVersion: (2011-March-02 18:42)
AM_CLIENT_TYPE: genericHTML
Set-Cookie: AMAuthCookie=AQIC5wM2LY4SfcyXSVnslvF7a5TLMa4KXz5Op9tRKzczinU.*AAJTSQACMDE.*; Domain=.companyname.co.uk; Path=/
Set-Cookie: amlbcookie=01; Domain=.companyname.co.uk; Path=/
X-AuthErrorCode: -1
Set-Cookie: AMAuthCookie=LOGOUT; Domain=.companyname.co.uk; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/
Set-Cookie: JSESSIONID=286b0c94d89dffd4f602831969ae; Path=/opensso; Secure
Content-Type: text/html;charset=UTF-8
Date: Fri, 24 Oct 2014 14:19:23 GMT
Connection: close
While downloading a text file from FileNet, we got the content below:
HTTP Status 401 -
type Status report
message
description This request requires HTTP authentication ().
Sun GlassFish Enterprise Server v2.1.1
Can someone please help me find a solution for this as soon as possible? Is it really possible to make the authentication succeed via Oracle database packages rather than invoking the URL via a browser? If it is possible, what is the next step I need to take?
Any help would be much appreciated.
Thanks and Regards,
Remya Sudhakaran
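One thing worth trying from the database side is to authenticate against OpenSSO explicitly first and then forward the returned token as the iPlanetDirectoryPro cookie on the FileNet request. The flow is sketched below in Python rather than PL/SQL (the same two requests can be made with UTL_HTTP); the host names and credentials are placeholders, and it assumes the standard OpenSSO identity REST endpoint is enabled in your deployment:

import requests

# Placeholder hosts and credentials.
opensso = "https://sso.companyname.co.uk/opensso"
filenet_url = "https://filenet.companyname.co.uk/some/document"

# Step 1: authenticate; OpenSSO's legacy REST API answers with "token.id=<SSOToken>".
resp = requests.post(opensso + "/identity/authenticate",
                     data={"username": "svc_user", "password": "secret"})
resp.raise_for_status()
token = resp.text.strip().split("token.id=", 1)[1]

# Step 2: present the token as the OpenSSO session cookie on the FileNet call.
doc = requests.get(filenet_url, cookies={"iPlanetDirectoryPro": token})
print(doc.status_code)  # should be 200 with real content once the session is accepted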
I'm using the googlerequest object to retrieve notification data for a certain serial number in the sandbox environment.
POST argument (XML) I send to Google:
<?xml version="1.0" encoding="UTF-8"?><notification-history-request xmlns="http://checkout.google.com/schema/2"><serial-number>631274667786221-00005-6</serial-number></notification-history-request>
Response from curl:
HTTP/1.1 500 Internal Server Error
Content-Type: application/xml; charset=UTF-8
Date: Mon, 03 Jun 2013 12:28:57 GMT
Expires: Mon, 03 Jun 2013 12:28:57 GMT
Cache-Control: private, max-age=0
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Set-Cookie: S=payments_api=P4yzgVwZyqdAb7S_BUtJXw; Expires=Mon, 03-Jun-2013 12:58:57 GMT; Path=/; Secure; HttpOnly
Server: GSE
Transfer-Encoding: chunked
<?xml version="1.0" encoding="UTF-8"?>
<error xmlns="http://checkout.google.com/schema/2" serial-number="f9338a0b-b14a-4afc-956b-5618b9741245">
<error-message>Internal error in server</error-message>
</error>
I can't answer for Google, but it seems the Google Checkout sandbox has (already) been shut down / deprecated. This is my guess/perception, as I cannot log in to my Google Checkout sandbox, which was at https://sandbox.google.com/checkout/sell, for sandbox-related activity like integration settings, the debug console, etc.
I do see that the former sandbox Merchant center is accessible at:
https://wallet-web.sandbox.google.com/manage
So you can check sandbox orders (across different APIs) but as stated, it doesn't have the knobs/switches relevant to Google Checkout (other APIs have different "consoles" for API settings).
I don't know why (whether it's early deprecation)... perhaps it's late in the game, as far as the Google Checkout retirement goes, to start debugging now...
See the Google Checkout deprecation/retirement in November 2013 announcement for more info.
Update
Q: The issue happens when I set the following options and send them via curl(): $options['shopping-cart.buyer-messages.include-gift-receipt-1'] = 1; $options['shopping-cart.buyer-messages.special-instructions-1'] = '';
You mean in the initial Checkout POST you sent to Google? If so, that instruction tells Google to offer those screens at the Google Checkout web site (not yours). I believe you shouldn't provide values for one field only - you'll have to provide all fields if you want to pre-populate from your web site.
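In other words, either omit both buyer-message keys and let the buyer fill those screens in at Google, or supply real values for both; expressed here as a plain dictionary (the values are made up):

# Made-up values; supply both fields or neither.
options = {
    "shopping-cart.buyer-messages.include-gift-receipt-1": 1,
    "shopping-cart.buyer-messages.special-instructions-1": "Please gift wrap",
}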
Sorry, it's really tough to debug without the Integration Console (which was part of the original Google Checkout console/UI in both sandbox and production) - it likely would have shown more error detail...
In the document "Optimize Cache - Make the Web Faster - Google Developers", Google states that
It is important to specify ONE of Expires or Cache-Control: max-age, AND ONE of Last-Modified or ETag, for all cacheable resources. It is redundant to specify both Expires and Cache-Control: max-age, or to specify both Last-Modified and ETag.
I'm using the classes in Microsoft.WindowsAzure.StorageClient to upload images to a blob container, practically the same code as can be seen in the open source project Azure Storage Explorer.
The resulting image is served with BOTH Last-Modified and ETag:
ETag: 0x8CFED5D3384112F
Last-Modified: Tue, 12 Mar 2013 17:21:43 GMT
So the next browser request sends the HTTP headers:
If-Modified-Since: Tue, 12 Mar 2013 17:21:43 GMT
If-None-Match: 0x8CFED5D3384112F
How can I force Azure Storage to use only one of the two directives to eliminate this redundancy?
The short answer is you can't.
When thinking about this, it's important to remember that when you access blob storage you are not accessing a file on a web server; you're using a REST API that happens to return files.
Microsoft offers no way to remove headers that it deems essential to the storage API.
If you're worried about excessive headers, the response also includes several x-ms-... headers which are intended for clients of the API that aren't browsers.
Personally, I would not worry that much about both tags being sent back, as this is actually recommended by RFC 2616.
13.3.4 Rules for When to Use Entity Tags and Last-Modified Dates
...
HTTP/1.1 origin servers:
...
... the preferred behavior for an HTTP/1.1 origin server is to send both a strong entity tag and a Last-Modified value.
An HTTP/1.1 client MUST use the entity tag in any cache-conditional request, and if both an entity tag and a Last-Modified value are present, it SHOULD use both.
I hope that will clarify why both tags are sent back from the Azure Storage server.
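In practice the revalidation request simply echoes both validators back, exactly as the browser headers in the question show. A quick Python sketch with the requests library; the blob URL is a placeholder and the validator values are the ones from the question:

import requests

# Placeholder blob URL; ETag and Last-Modified values taken from the question.
url = "https://myaccount.blob.core.windows.net/images/photo.png"

resp = requests.get(url, headers={
    "If-None-Match": "0x8CFED5D3384112F",
    "If-Modified-Since": "Tue, 12 Mar 2013 17:21:43 GMT",
})

if resp.status_code == 304:
    print("Cached copy is still valid")  # body is not re-downloaded
else:
    print("Fresh copy received:", len(resp.content), "bytes")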
The following is an HTTP response header from an image on our company's website.
HTTP/1.1 200 OK
Content-Type: image/png
Last-Modified: Thu, 03 Dec 2009 15:51:57 GMT
Accept-Ranges: bytes
ETag: "1e61e38a3074ca1:0"
Date: Wed, 06 Jan 2010 22:06:23 GMT
Content-Length: 9140
Is there any way to know whether this image is publicly cacheable by a proxy server? The RFC definition seems to be ambiguous: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.1 and http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.4.
Run RED on your URL and it'll tell you whether the response is cacheable, among other information.
The headers you show appear to be cacheable.
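Because there is no Expires or Cache-Control header, a cache that does store it has to fall back to heuristic freshness; RFC 2616 suggests (but does not require) using some fraction, typically 10%, of the interval between Date and Last-Modified. For the headers above that works out to roughly three and a half days:

from email.utils import parsedate_to_datetime

# Validator dates from the response above.
last_modified = parsedate_to_datetime("Thu, 03 Dec 2009 15:51:57 GMT")
date = parsedate_to_datetime("Wed, 06 Jan 2010 22:06:23 GMT")

# Typical heuristic: 10% of the Date - Last-Modified interval.
freshness = (date - last_modified) * 0.1
print(freshness)  # about 3 days 10 hours; a proxy may serve the cached copy that long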
If you would like to control the caching behavior of correctly configured proxies and web browsers, you might investigate using the Cache-Control and Expires headers.
Here is a webpage I had bookmarked that gives one person's opinion of how to interpret the specifications you list (plus some others):
http://www.web-caching.com/mnot_tutorial/how.html
If you need to guarantee that someone sees a completely new image each time (even with misconfigured devices between you and them), you may want to consider using a randomized or GUID value as part of the URL.
Here is a tutorial on setting headers for proxy caching. Be sure to read the part about setting cookies!