I am following the ClickDimensions article for creating a new domain record, and as the article says, I first created CNAME records on my domain administrator page. My CNAME is something like:
host: web
value: analytics.clickdimensions.com
So, just like the documentation says.
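As a sanity check, a dig query can confirm the record has actually propagated (assuming the domain is example.com; substitute your own):
dig web.example.com CNAME +short
This should print analytics.clickdimensions.com once the record is live.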
The problem is that when I am on ClickDimensions > Domain Records, fill in the values for the domain and web content CNAME, and click "Save", I get an "Object null reference" error and then the yellow ASP.NET error page...
I have tried to find answers online, but no luck (I actually found one person complaining about a very similar, possibly the same, issue on their support pages, but he got no answer).
So, after a few sessions with support, it turned out to be an error on their back end.
I successfully deployed my project on Heroku. The problems started when I wanted to set a custom domain for it (bought on namecheap.com).
I don't get an error in either Heroku or Namecheap. Everything looks correct, but somehow I cannot reach the page, and running an analysis on the following website returns these errors:
https://dnsviz.net/d/amsterdamtruffles.com/dnssec/
Errors:
- No DNSKEY records found.
- The CNAME RRset was not signed by any keys in the chain-of-trust.
Can anybody point me in the right direction to solve these errors?
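Two dig checks may help narrow things down: the first lists whatever DNSKEY records the zone publishes (relevant to the first error), and the second shows what is actually answered at the apex:
dig amsterdamtruffles.com DNSKEY +short
dig amsterdamtruffles.com +noall +answer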
I also had this occur, and the cause was setting the root domain (@) CNAME to my github.io address. The correct setup, sketched below, has the root and www pointed to the GitHub Pages servers, and also includes the www CNAME and the GitHub Pages TXT domain-verification record.
Finally, there is a CNAME file in my GitHub Pages repository containing the domain name to serve, and with the errant root CNAME record gone on Namecheap, it all works.
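As a rough sketch, the Namecheap host records might end up looking like this (youruser is a placeholder for the GitHub account; the four A-record IPs are the addresses GitHub documents for Pages apex domains, and the TXT host and value come from GitHub's domain-verification screen):
A Record        @      185.199.108.153
A Record        @      185.199.109.153
A Record        @      185.199.110.153
A Record        @      185.199.111.153
CNAME Record    www    youruser.github.io
TXT Record      _github-pages-challenge-youruser    <value from GitHub>
The CNAME file in the Pages repository then contains nothing but the bare domain, e.g. amsterdamtruffles.com.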
This is the first time I am buying a custom domain, and I want to access my Heroku Java Spring app using the new domain name instead of the one provided by Heroku.
The problem is that I get an ACM status of "DNS redirect not forwarding path" in the Heroku domains table, so it does not work. I also do not know what this means or how to fix it, even after looking here: https://devcenter.heroku.com/articles/automated-certificate-management#view-your-certificate-status
What I have done so far:
I bought a custom domain name from GoDaddy.
Then I went to Heroku Settings > Domains > Add domain.
After that I got a new domain entry in the table.
I went to the GoDaddy DNS records to add a new CNAME, where I added the domain name (mycustomdomain.com) as the name and the very long DNS target that Heroku auto-generated as the value.
When I went back to Heroku, this ACM status message was displayed:
DNS redirect not forwarding path
What am I missing? What do I have to do next to successfully connect the two parts?
I saw some posts around here about "redirecting", but I am not sure whether that applies to me.
Any help would be greatly appreciated.
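In case it helps while debugging, the Heroku CLI can show the generated DNS targets and the ACM state directly; a minimal sketch, assuming the app is called your-app-name and the domain is mycustomdomain.com:
heroku domains -a your-app-name                        # lists each domain with its DNS Target
heroku domains:add www.mycustomdomain.com -a your-app-name
heroku certs:auto -a your-app-name                     # shows the current ACM / certificate status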
If you're not already doing so, the host name for the CNAME record should be 'www' - and then the 'points to' would obviously be the URL Heroku gave you.
On the Heroku side, where you have listed the domain name itself, make sure to include the www. on the address - this is something I neglected to do a few weeks ago; once I added it, everything started working correctly.
I came across your issue because I've added a second URL (a .com variation) and I'm getting that message with this second domain - like you, I'm unsure what I have missed...
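Put together, the two sides should line up roughly like this (the herokudns.com target below is a made-up placeholder; use the DNS Target value Heroku generated for you):
GoDaddy DNS record:            Type CNAME    Name www    Value whispering-willow-abc123.herokudns.com
Heroku > Settings > Domains:   www.mycustomdomain.com    ->    whispering-willow-abc123.herokudns.com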
We have an integration with the Google Picker (read-only scope, Docs view). It used to work fine, but recently some users are getting a blank screen as soon as the popup shows; however, once they select a filter, everything starts working fine, with no problems after that.
Using the developer tools, I see all APIs returning 200 for that first request,
but there were no docs in the response (I believe 'https://docs.google.com/picker/pvr' is the API responsible for loading docs into the Picker).
When no docs are returned by the above API, Google calls another API, which I assume is probably for logging errors (//docs.google.com/picker/ohnoes).
This call has the following error params in it:
&error=Cached and requested query mismatch
&line=Not available
&viewToken=["all",null,{"query":null}]
&ms=97
&transferDocs=false
&numErrors=1
Has anybody else faced a similar problem?
What does the error "Cached and requested query mismatch" mean in the context of Drive docs?
FYI - most of the accounts facing this problem seem to be on a company domain, e.g. "jondoe@company.org" (a Google account with a company domain).
(Screenshot of the Picker filters omitted.)
Thanks for your help.
Not sure, but it looks like the issue may be related to a Google bug:
https://issuetracker.google.com/issues/64825685
For me, the code that was not working was:
addView(google.picker.ViewId.DOCS)
I replaced it with the code below, which works as expected:
var view = new google.picker.DocsView();
view.setIncludeFolders(true).setOwnedByMe(true).setParent('root');
addView(view)
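For context, here is a minimal sketch of how that DocsView slots into a full picker build; oauthToken, developerKey, and the callback are placeholders for whatever your existing integration already uses:
// assumes gapi.load('picker', ...) has already loaded the Picker library
var view = new google.picker.DocsView();
view.setIncludeFolders(true).setOwnedByMe(true).setParent('root');

var picker = new google.picker.PickerBuilder()
    .addView(view)                      // the configured DocsView instead of ViewId.DOCS
    .setOAuthToken(oauthToken)          // placeholder: OAuth token from your auth flow
    .setDeveloperKey(developerKey)      // placeholder: your API key
    .setCallback(function (data) {
        if (data.action === google.picker.Action.PICKED) {
            console.log(data.docs);     // the selected documents
        }
    })
    .build();
picker.setVisible(true);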
I'm trying to set up an MVC application that will service several Facebook applications for various clients. With help from Prabir's blog post I was able to set this up with v5.2.1, and it is working well, with one exception.
At first, I had only set up two "clients": one called DemoStore and the first client, ClientA. The application determines which client content and Facebook settings to use based on the URL. Example canvas URL: http://my_domain.com/client_name/
This works for ClientA, but for some reason when I try any DemoStore routes I get a 500 error. The error page points to an issue with the web.config.
Config Error:
Cannot add duplicate collection entry of type 'add' with unique key attribute 'name' set to 'facebookredirect.axd'
I am able to add additional clients with no problem, and changing DemoStore to something like "demo" while using the same Facebook application settings also works fine.
Working calls:
http://localhost:2888/ClientA/
http://localhost:2888/ClientB/
http://localhost:2888/Demo/
Failing call:
http://localhost:2888/DemoStore/
I was thinking this might be an MVC issue, but the Config Error points to the facebookredirect handler. Why would the SDK try to add this value to the config during runtime, and only for this specific client?
Any insight would be greatly appreciated.
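For what it's worth, the entry IIS is complaining about is the handler registration the Facebook C# SDK drops into web.config, roughly of this shape (the exact type attribute varies by SDK version, so treat it as an approximation); the error fires when IIS ends up seeing the same <add> twice, e.g. once inherited from a parent application and once in the nested one:
<system.webServer>
  <handlers>
    <add name="facebookredirect.axd" verb="*" path="facebookredirect.axd"
         type="Facebook.Web.FacebookAppRedirectHttpHandler, Facebook.Web" />
  </handlers>
</system.webServer>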
I managed to figure out what went wrong here. Silly mistake...
After I had set up the application routes to require the client_name, I changed the Project URL in the project properties to point to demostore by default. When I hit Ctrl+S, a dialog popped up that I promptly clicked through without reading.
When I changed the Project URL, IIS Express created a new virtual directory for the project. This was the source of my problem. Why? I'm not sure, but once I removed the second site from my applicationhost.config I was able to access the DemoStore routes.
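For anyone hitting the same thing, the file to look in is IIS Express's applicationhost.config (typically under Documents\IISExpress\config, or the solution's .vs\config folder in newer Visual Studio versions). The symptom is two <site> entries for the same project, roughly like this, with site names, paths, and ports here being illustrative only:
<sites>
  <site name="MyProject" id="2">
    <application path="/" applicationPool="Clr4IntegratedAppPool">
      <virtualDirectory path="/" physicalPath="C:\code\MyProject" />
    </application>
    <bindings>
      <binding protocol="http" bindingInformation="*:2888:localhost" />
    </bindings>
  </site>
  <!-- second, unwanted site created when the Project URL was changed; deleting it restored the DemoStore routes -->
  <site name="MyProject(1)" id="3">
    ...
  </site>
</sites>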
Moral of the story: read the VS dialog messages!
Recently the search crawler stopped working on my MOSS installation. The message in the crawl log is:
Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (The item was deleted because it was either not found or the crawler was denied access to it.)
The default content account is an admin on the site collection that I am trying to crawl.
Almost every result for this error on Google tells me to add the DisableLoopbackCheck registry key with a value of 1. I have done this and rebooted, and the error continues.
The "Do not allow Basic Authentication" checkbox in my crawl rule screen is unchecked.
Is there anything else that could be causing this error? Something with file system or database permissions maybe?
Edit: All signs seem to indicate that the "DisableLoopbackCheck" should fix this, but it doesn't seem to work. Could I be doing something wrong when I enable this?
I'm doing it in My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, where I create a new DWORD value called DisableLoopbackCheck and give it the hex value 1.
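For what it's worth, the same value can be set from an elevated command prompt, which removes any ambiguity about key versus value in regedit (the usual advice is still to reboot afterwards):
reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f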
It turned out not to be related to DisableLoopbackCheck. The problem was that the search was accessing the site through its external URL. Apparently you are not supposed to be able to access a site from within the server using the same URL you use to reach it from the outside, at least in pre-SP1 MOSS. But I had been doing exactly that for about two years. MS Support tells me they don't quite understand how it was ever working. So it looks like I ran into an issue that should have been manifesting all along; I'm not sure what caused it to appear suddenly, maybe some routine patching of the server. The solution was to extend the web application so it was accessible internally through the machine name, and then point the crawler at that.