oauthd not asking me to set a user/pass on first use - oauth.io

I'm trying to figure out the oauthd system (https://oauth.io/docs/oauthd), so I can do social media authentication on my own server rather than using theirs (which seems to go offline a fair bit for me).
Anyway, I've installed it, run init to create the instance, and then start, as per the output below:
oauthd start
Initializing plugins engine
Loading admin-auth
Loading slashme
Loading request
Loading front
oauthd start server
oauthd listening at http://:::6284 for https://example.com:6284
Server is ready (load time: 1.85s) Wed, 11 Mar 2015 17:20:24 GMT
Now, as per the document - I should go to:
http://example.com:6284/admin
...and it's MEANT to ask you to set up a user/password, which will be the admin login. Unfortunately, it doesn't do this. I just get sent to:
http://example.com:6284/login ... where it asks me to log in with an email/password, neither of which I have yet.
Can anyone explain what step I'm missing? I'd love to get this going, as I'm not a fan of having to use 3rd-party systems (i.e. oauth.io); you are at their mercy when they go offline.

Logging in for the first time actually creates your admin user with the entered credentials. So, for example, if you enter 'admin@yourmail.com' as the e-mail and 'letmepassplease' as the password, you'll be able to log in with those credentials in the future.
I can see why it's not clear, and we will probably improve this later. In the meantime, you can create an issue about this on the GitHub repo.
Hope this helps :)


Setting up ErpQueryEndpoint Destination for VDM

I have created a destination for VDM called ErpQueryEndpoint and have unsuccessfully attempted to obtain business partner info with one of the Java VDM tutorials. Below is an export of that destination - I've tried this with and without TrustAll = true. When I use the 'Check Connection' button on the Destination screen, I get "302: Redirect" instead of 200. When I attempt to navigate to the URL below from Chrome, it redirects me to a non-SAP logon screen. (I believe our Basis team has tried to set up SSO with Azure.) I'm wondering if this redirection is what is causing my Java VDM program to fail.
#Password=<< Existing password/certificate removed on export >>
#
#Mon Mar 11 15:17:38 UTC 2019
Description=ErpQueryEndPoint for java programs that use Virtual Data Model (VDM)
Type=HTTP
Authentication=BasicAuthentication
Name=ErpQueryEndpoint
ProxyType=Internet
URL=https\://my######-api.s4hana.ondemand.com
User=S000#######
Thanks for your help. The tutorial program is now working. Getting a 302: Redirect when clicking 'Check Connection' was not the problem. Even though only the base URL was in the destination, I still needed to supply the credentials from the business partner communication arrangement. (It also works for sales contracts when I supply the credentials from its respective communication arrangement.)
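For anyone hitting the same wall: a quick way to confirm the credentials before wiring them into the destination is to call the OData service directly. This is a hypothetical sketch - the hostname and user are placeholders, and I'm assuming the standard API_BUSINESS_PARTNER service path:
# Placeholders: substitute your API host and the credentials from the
# business partner communication arrangement
curl -u "COMMUNICATION_USER:PASSWORD" \
  'https://myXXXXXX-api.s4hana.ondemand.com/sap/opu/odata/sap/API_BUSINESS_PARTNER/A_BusinessPartner?$top=1'
A 200 with an OData payload means the credentials are good, and any remaining failure is in the destination configuration rather than the service itself.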

Hyperledger-Composer REST Authentication

Requesting assistance with Hyperledger Composer. I have created a network and a web app around the REST API that was built with the composer-rest-server. I am able to add participants and assets and execute transactions with the default settings. I am now trying to add authentication to the REST server, as well as add identities to new participants. However, I got stuck. I have reviewed the information at
https://hyperledger.github.io/composer/integrating/enabling-rest-authentication.html
But I'm not sure where I should place the export COMPOSER_PROVIDERS='{.... information to continue the setup.
Any assistance, tips and tricks are much appreciated.
OK, so I figured it out. The problem was that I was running an older version of composer-rest-server.
I installed the developer tools back in Sep '17 and did the tutorial soon after. I tried the tutorial again and noticed that the deployment command was different and it would not let me deploy my network.
So I updated composer-rest-server and the composer CLI, and it deployed fine. I then followed the steps on the authentication webpage that I referenced above, and it worked as intended. I deployed my personal network with the new command and that worked as well.
Lesson learned: this stuff is still being updated, and I should keep better track of what changes. Thank you very much @nilakantha singh deo
Open a new terminal from inside the project folder. Format your COMPOSER_PROVIDERS in Notepad according to the document you mentioned, then copy the whole thing and paste it into the terminal. You can then echo it (see it) by typing the following.
echo $COMPOSER_PROVIDERS
It should ideally print back the same JSON.
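For reference, the pasted export should look something like this (following the format in the linked doc; the client ID and secret are placeholders for the values from your own GitHub OAuth application):
export COMPOSER_PROVIDERS='{
  "github": {
    "provider": "github",
    "module": "passport-github",
    "clientID": "REPLACE_WITH_CLIENT_ID",
    "clientSecret": "REPLACE_WITH_CLIENT_SECRET",
    "authPath": "/auth/github",
    "callbackURL": "/auth/github/callback",
    "successRedirect": "/",
    "failureRedirect": "/"
  }
}'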
Then make sure that composer-rest-server is running with multiuser mode and authentication enabled, in the same terminal where you echoed and saw the COMPOSER_PROVIDERS.
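Something like the following, assuming your business network card is named admin@my-network (the card name is a placeholder; -a enables authentication and -m enables multiuser mode):
composer-rest-server -c admin@my-network -a true -m true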
Now, in the browser, go to
localhost:3000/auth/github
It should ask for authentication. The rest of the steps are listed in the document you mentioned.
Cheers!

"New version available" with service worker and sw-precache

I'm trying to use sw-precache, but I must be doing something wrong!
I'm mostly using the demo code available from the GitHub repo and can't seem to get updates to the app to come through. Once it's cached the first time, it never checks for new versions.
I was expecting that when I publish a new service worker, the browser would request the new service worker and update the cache accordingly in the background. Then using the registration code in the example, I would be able to prompt the user to refresh and get the latest version from their newly refreshed cache.
Would really appreciate if someone could please point me in the right direction.
Example
To demonstrate the problem, I've created an isolated example here:
https://github.com/stevenocchipinti/sw-precache-demo
The example uses a basic skeleton from create-react-app, which has a built-in build task that takes care of fingerprinting the filenames, etc.
I suspect the problem is with me caching everything by using the following sw-precache config:
{
"staticFileGlobs": [ "build/**/*.*" ],
"stripPrefix": "build/"
}
There are more accurate steps in the repo's readme, but the basic steps I'm taking to reproduce the problem are as follows (with my probably incorrect expectations).
Steps and Assumptions
Browse to the app for the first time
I should see Content is now available offline! in the console
Reload the page
The message in the console should not appear again because the service worker is installed, but the page should still work.
Go offline and reload the page
The page should still work
Make a visible change to the source code
Rebuild (run the build task and sw-precache; a sketch of these commands follows this list)
This is where my understanding must be wrong
Reload the page
The service worker should update the cache in the background
When it's done, you should see New or updated content is available. in the console
The changes should not be visible until the next reload
Reload the page again
The browser will use the new cache this time around
The changes should be visible now!
There shouldn't be any messages in the console
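For completeness, the rebuild step above is roughly the following (assuming the config shown earlier is saved as sw-precache-config.json; that filename is my assumption):
# Rebuild the app, then regenerate the service worker over the fresh build/
npm run build
sw-precache --config=sw-precache-config.json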
The problem
Once the app has been cached initially, it will never update unless you unregister the service worker or force a reload.
I'm not sure how to make this work - any help would be greatly appreciated!
After replicating your development hosting environment, I can see that you're serving your service-worker.js file with a browser HTTP cache lifetime of one hour.
There's more information as to why this is leading to the behavior you're seeing, along with best practices, in this previous answer. As mentioned at the top of that answer, browsers plan on changing their behavior to stop honoring the HTTP cache for the service worker file by default, mainly due to the type of confusion that you're experiencing here. For the time being, though, the production versions of both Chrome and Firefox continue to honor those headers.
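If you want to verify this on your own setup, check the response headers on the service worker file (host and port below are placeholders for wherever the demo is served):
curl -sI http://localhost:8080/service-worker.js | grep -i cache-control
# A response like "Cache-Control: max-age=3600" means the browser may keep
# using the old service worker for up to an hour before checking for updates
The usual fix is to serve service-worker.js with Cache-Control: no-cache (or max-age=0) so the browser revalidates it on every navigation.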

Sporadic redirects on secondary magento store page

Recently transferred my Magento 1.7 store to a new host, and started having a frustrating problem.
We've got the store sitting behind a login shell - you can see it at http://www.seacadetshipsstore.com. Base exchange takes you to the root store (/magento/), and the gearlocker login takes you to a secure sub-store (/magento/gearlocker/).
The problem is, ever since transferring to the new host, /magento/gearlocker/ is sporadically redirecting to /magento/. I can only reproduce it 1 in every 10 times, but customers are constantly complaining that they can't access the secure store for this reason.
I've also noticed that if I turn off the security and have clients navigate to /magento/gearlocker/ directly, it seems to fix the problem for most - they no longer get the redirect after logging in. Only a few of them still have the error, and they're all PC users on various browsers.
I've set up a demo login for stack:
https://www.seacadetshipsstore.com/login.php
U: stack_login
P: thanks
I doubt it's an issue with the base URL or base link URL; otherwise the error would be much more consistent. I've gone through Magento's official tutorials and made sure the secondary store was set up properly (remember, it was working fine on the old host). I also know it's not anything to do with the login shell - all it does is validate the user's login and redirect to /magento/gearlocker/.
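For what it's worth, here's how the configured base URLs per store scope can be double-checked directly in the database (database name and user below are placeholders):
# List the base URLs Magento has stored for each store scope
mysql -u DB_USER -p MAGENTO_DB -e \
  "SELECT scope, scope_id, path, value FROM core_config_data
   WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');"
If the gearlocker store view's rows pointed at the root store, that would explain the redirect.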
Can anyone reproduce this error? Can anyone tell me what's going on, or how I might fix it? Thanks in advance!

MOSS search crawl fails with "Access is denied ..."

Recently the search crawler stopped working on my MOSS installation. The message in the crawl log is
Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (The item was deleted because it was either not found or the crawler was denied access to it.)
The default content account is an admin on the site collection that I am trying to crawl.
Almost every result for this error on Google tells me to add the DisableLoopbackCheck registry key with a value of 1. I have done this and rebooted, and the error continues.
The "Do not allow Basic Authentication" checkbox in my crawl rule screen is unchecked.
Is there anything else that could be causing this error? Something with file system or database permissions maybe?
Edit: All signs seem to indicate that the "DisableLoopbackCheck" should fix this, but it doesn't seem to work. Could I be doing something wrong when I enable this?
I'm doing it in My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, where I create a new DWORD value called DisableLoopbackCheck and give it the hex value 1.
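In case it helps anyone compare, the same change as a one-liner from an elevated command prompt (standard reg syntax) would be:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f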
It turned out not to be related to DisableLoopbackCheck. The problem was that the search was accessing the site through its external URL. Apparently you're not supposed to be able to access a site from within the server using the same URL you use to reach it from outside, at least in pre-SP1 MOSS - yet somehow I had been doing exactly that for about two years. MS Support tells me they don't quite understand how it was ever working. So it looks like I ran into an issue that should have been manifesting all along. I'm not sure what caused it to appear suddenly; maybe some routine patching of the server. The solution was to extend the web application so it was accessible internally through the machine name, then point the crawler at that.
