Unable to get the configuration file WcmApiConfig.properties - filenet-p8

I have an issue with FileNet-p8:
Simply put, I cannot access a workflow that has already been designed. I always get a popup asking me to authenticate; I use the same login/password that I use to access Workplace, but I always get the same message:
Unable to get the configuration file WcmApiConfig.properties
I'm working with the IBM JVM 1.6 and the Firefox browser.
Thanks.

In our environment, you must use IE in order to avoid that error message. We have not been able to get Firefox to work with WorkplaceXT or PCC.
Even when launching PCC from ACCPE we need to use IE.
If you see the login screen, don't even bother with your credentials. It simply will not work.

Related

Google Drive API Console: Error saving Drive UI integration page

I have a webapp in production that interacts with Google Drive through Google Drive API.
I need to change some settings in Drive interaction but I can't save.
When I save the Drive UI integration page, I receive this error:
There's a problem at our end.
Please try again. If the problem persists, please let us know using
the "Send feedback" link below. Thanks!
(Looking at the Network console, there is an Internal Server Error on a POST call.)
I tried to send feedback for months: nobody answers and the bug is still there.
I tried also to create another project: I can save the first time but then the bug returns.
What can I do? Does anyone have the same problem?
Is there a way to receive a reply from Google? Is there some workaround?
Thank you.
I think the problem must be the Client ID.
Before adding the Client ID, go to Credentials -> OAuth 2.0 Client IDs,
then edit your Client ID and add your production site URL to Authorized JavaScript origins and Authorized redirect URIs.
Then enter that Client ID on the Drive UI integration page.
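For example, if the production app were served from https://app.example.com (a made-up URL for illustration), the OAuth client entries would look roughly like:
Authorized JavaScript origins: https://app.example.com
Authorized redirect URIs: https://app.example.com/oauth2callback
The redirect path here is only an assumption; use whatever callback route your app actually registers, then paste that same Client ID into the Drive UI integration page.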
While trying to get the Drive UI configured myself, I noticed a couple of issues (which don't produce any specific error messages):
When adding an Open URL, it has to be a valid domain. For instance, I tried to test it with localhost, to no avail: something like https://devbox.app.com worked, but something like https://localhost:8888 does not. Even though https://localhost is a valid JavaScript origin in the client_id configuration (at least for the app I am working on; I'm not sure about other apps), localhost doesn't work as an Open URL.
When adding MIME types, each one needs to be in the format */* and can include custom MIME types like application/custom+xml and application/custom-name+json. I'm not sure about custom types that aren't based on a particular format like XML or JSON, and I'm also not sure about wildcards.
When adding file extensions, do not include the '.', just the name of the file extension.
For the app icon, I found the upload only failed when the image wasn't the exact dimensions; I ended up editing some icons in Photoshop to change the pixel dimensions as a quick workaround during development.
That got it to save for me, and I tested it with a file that had a custom MIME type (application/custom-name+xml specifically) and a custom file extension! A sketch of the fields follows below.
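To make that concrete (the specific values below are made up for illustration), a set of fields along these lines should satisfy the rules above:
Open URL: https://devbox.app.com/open
Default MIME types: application/custom-name+xml, application/custom-name+json
Default file extensions: custxml, custjson
App icon: a PNG at exactly the dimensions the form asks for
The key points are a real, non-localhost domain for the Open URL, full type/subtype MIME strings, and extensions without the leading dot.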

Unable to run typesafe activator ui in cloud9

I was unable to run Typesafe Activator in Cloud9.
The Activator page loads OK, but then I get the following error messages:
In the browser:
"Connection lost; you will need to reload the page or restart
Activator. It's also possible that Activator is open in another tab,
which causes this error."
In the Cloud9 terminal:
"! #6j9pn9913 - Internal server error, for (GET)
[/home/stream?token=cba94...64394] -> play.api.Application$$anon$1:
Execution exception[[RuntimeException: Bad CSRF token for websocket]]"
Any help on how to solve this?
Activator listens on 127.0.0.1 and is not even supposed to be listening on an external interface; it isn't completely clear to me why you can connect to it at all.
But however that connection works, it looks like the result is that the CSRF check fails. The CSRF check is checking that the query parameter there (?token=cba94...) matches a cookie that should have been set by the Activator page load. This demonstrates that the /home/stream request (to open the websocket) is coming from a page that has the cookie, i.e. from the same domain. Perhaps Activator doesn't know the domain you are loading the page from and therefore the cookie gets lost? Just a guess.
When the CSRF check fails that would then fail the websocket and cause the "Connection lost" error, though that error can also be caused by other things (such as proxies and antivirus software) that interfere with websockets.
You could possibly fix this, or take a step towards fixing this, by configuring the http.address system property to be picked up here: https://github.com/typesafehub/activator/blob/52012321b3a5a9f9dcf53582664e385d92763718/ui/app/activator/UIMain.scala#L130
You could also try setting application.defaultCookieDomain to the domain you are using (this is a Play config option and Activator's UI is a play app).
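A minimal sketch of what that could look like, assuming a Cloud9 workspace reachable at yourworkspace.c9.io (a placeholder hostname) and that the activator launcher forwards -D properties to the JVM:
./activator -Dhttp.address=0.0.0.0 -Dapplication.defaultCookieDomain=yourworkspace.c9.io ui
The first property makes the UI bind to a non-loopback interface; the second sets the cookie domain to the host you actually load the page from, so the CSRF cookie has a chance of coming back on the /home/stream request. This is untested, just a direction to try.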
However:
you may well find other bugs in this scenario - it is not tested or supported
it is not at all secure unless you have some kind of authenticated proxy in front of it (there's no auth on the activator UI, and the UI has buttons to view and delete files, etc).
The activator shell command line may be a better option when your project builds on a headless server, though I won't say running the UI is 100% impossible; you might be able to get it to work.
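If you go the headless route instead, a minimal session (assuming a standard Activator/sbt project layout) would be something like:
./activator shell
> compile
> test
> run
which gives you the build and run tasks without exposing the web UI at all.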

Allowing Cross domain ajax calls from firefox

I want to change Firefox's settings so that it allows cross-domain AJAX calls, since Firefox's security model does not allow them by default. I know that calls within the same domain are allowed. I have code (given below) which works fine in Safari, but Firefox doesn't display the results: when the code calls the csce server, the request is blocked and returns an error because the code is running on my local machine. I know it will start working if I upload the code to the csce server, but I want to run it from my machine. Can anyone help me resolve this? I have spent the past couple of days just searching for a solution.
Kindly suggest how to achieve this, or should I go with some older version of Firefox?
I googled and set the browser parameters in the config file as specified on this site, but it still doesn't work:
http://code.google.com/p/httpfox/issues/detail?id=20
Maybe you could use privoxy and tell it to inject something like "Access-Control-Allow-Origin: *" in the server response.
To do this, you would have to go into the file user.filter (create it if it doesn't exist) in Privoxy's configuration directory and insert something like this:
SERVER-HEADER-FILTER: allow-crossdomain
s|Server: .*|Access-Control-Allow-Origin: *|
Instead of Server, you can also use any other header that's always present and you don't need.
And add this to user.action:
{+server-header-filter{allow-crossdomain}}
csce.unl.edu
Note: I didn't test it.
https://developer.mozilla.org/En/HTTP_access_control
http://config.privoxy.org/user-manual/
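One way to sanity-check the setup (assuming Privoxy is running on its default port 8118, and picking an arbitrary URL on the target host) is to fetch the headers through the proxy and look for the injected one:
curl -x http://localhost:8118 -I http://csce.unl.edu/ | grep -i access-control-allow-origin
If the filter applied, Access-Control-Allow-Origin: * should show up where the Server header used to be.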
This appears to enable cross-site requests from file:// pages in Firefox 4, although it prompts you, so it might not be suitable for more than simple test pages:
netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");

Troubles with my open id provider. How to debug?

I have my own OpenID provider on my website, using phpMyID. It worked flawlessly until now, but apparently it's not working anymore: I am unable to log in anywhere I've tried. How can I debug what's going on, to understand where the problem is?
I can add more details if required, but it would be better if I could figure it out by myself without having to paste a lot of material.
Without any details, all I can say is read the logs (if phpMyID provides any), and capture the browser redirects with something like TamperData to see if there's anything obviously wrong there.
You could also try http://test-id.org/

MOSS search crawl fails with "Access is denied ..."

Recently the search crawler stopped working on my MOSS installation. The message in the crawl log is
Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (The item was deleted because it was either not found or the crawler was denied access to it.)
The default content account is an admin on the site collection that I am trying to crawl.
Almost every result for this error on Google tells me to add the DisableLoopbackCheck registry key with a value of 1. I have done this and rebooted, and the error continues.
The "Do not allow Basic Authentication" checkbox in my crawl rule screen is unchecked.
Is there anything else that could be causing this error? Something with file system or database permissions maybe?
Edit: All signs seem to indicate that the "DisableLoopbackCheck" should fix this, but it doesn't seem to work. Could I be doing something wrong when I enable this?
I'm doing it in My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, where I create a new DWORD value called DisableLoopbackCheck and give it the hex value 1.
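For reference, the equivalent change from an elevated command prompt would be something like:
reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1
followed by a reboot, which should match what the Registry Editor steps above do.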
It turned out not to be related to DisableLoopbackCheck. The problem was that the search was accessing the site through its external URL. Supposedly you are not supposed to be able to access a site from inside the server using the same URL you use to reach it from the outside, at least in pre-SP1 MOSS, yet I had somehow been doing exactly that for about two years. MS Support tells me they don't quite understand how it ever worked, so it looks like I ran into an issue that should have been manifesting all along. I'm not sure what caused it to appear suddenly; maybe some routine patching of the server. The solution was to extend the web application so it was accessible internally through the machine name, then point the crawler at that.
