So we have set up an API Manager with an Identity Server as the Key Manager. carbon.super is the only tenant that can create APIs in the API Manager, and it does so with no issues.
Internal/everyone has been granted every permission on their tenants, and users from the different tenants can log in to the API Manager Publisher and Store. However, users on the other tenants cannot create any APIs; the API Manager logs the following errors (I removed the @ references to make it easier to read).
TID: [1] [] [2019-02-11 12:58:19,669] #test.dk [1] [AM]ERROR {org.wso2.carbon.governance.api.common.dataobjects.GovernanceArtifactImpl} - Error in associating lifecycle for the artifact. id: d9afaaa9-a2fe-479f-927b-658dc34393b6, path: /apimgt/applicationdata/provider/admin-AT-test.dk/WorldBank/1/api. {org.wso2.carbon.governance.api.common.dataobjects.GovernanceArtifactImpl}
org.wso2.carbon.registry.core.exceptions.RegistryException: Couldn't find aspectName 'APILifeCycle'
TID: [1] [] [2019-02-11 12:58:19,680] #test.dk [1] [AM]ERROR {org.wso2.carbon.apimgt.impl.UserAwareAPIProvider} - Error while performing registry transaction operation {org.wso2.carbon.apimgt.impl.UserAwareAPIProvider}
org.wso2.carbon.governance.api.exception.GovernanceException: Error in associating lifecycle for the artifact. id: d9afaaa9-a2fe-479f-927b-658dc34393b6, path: /apimgt/applicationdata/provider/admin-AT-test.dk/WorldBank/1/api.
TID: [-1234] [] [2019-02-11 12:58:19,684] ERROR {JAGGERY.site.blocks.item-design.ajax.add:jag} - org.mozilla.javascript.WrappedException: Wrapped org.wso2.carbon.apimgt.api.APIManagementException: Error while performing registry transaction operation (/publisher/modules/api/add.jag#108)
For the full issue log, go to this link: https://pastebin.com/9LDv3u8Q
I can create applications on the /store with the tenant users.
The APILifeCycle does not seem to be linked to the tenants that are created, which makes it impossible to create APIs on the server.
I have tried copying APILifeCycle.xml from the API Manager to the same location on the Identity Server. carbon.super does have the APILifeCycle in the Extensions tab of the API Manager's Carbon console, but the tenants do not.
I have been researching how I could fix this; another source I attempted with no luck is:
Link: http://ishara-cooray.blogspot.com/2018/01/how-to-fix-orgwso2carbonregistrycoreexc.html
It has been set up as described in this link: https://docs.wso2.com/display/AM260/Configuring+WSO2+Identity+Server+as+a+Key+Manager
What I expect to happen:
Users from tenants should be able to create and publish APIs on their tenant domain.
We have provided a fix for this issue; if you take a WUM update (https://wso2.com/updates/wum), you will get the patch for it.
If you do not have access to WUM updates, then try putting the APILifecycle.xml file into the /repository/resources/lifecycles folder in IS (this won't work for existing tenants; new tenants should work). For existing tenants, you can log in to the management console (https://localhost:9443/carbon), navigate to Extensions > Configure > Lifecycles, and upload the APILifecycle.
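As a hedged sketch (the paths assume default WSO2 product layouts, and <APIM_HOME>/<IS_HOME> are placeholders for your actual installation directories), the manual copy would look roughly like this:

# Copy the API lifecycle definition from the API Manager into the Identity Server (Key Manager)
cp <APIM_HOME>/repository/resources/lifecycles/APILifeCycle.xml \
   <IS_HOME>/repository/resources/lifecycles/
# Restart the Identity Server; tenants created after this should pick up the lifecycle.
# Existing tenants still need the lifecycle uploaded through Extensions > Configure > Lifecycles.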
Thanks
I'm trying to use a third-party app that requires gce_client_id and gce_client_secret keys. In order to generate them, I browsed to the Credentials icon and tried to create an OAuth 2.0 Client ID. However, the system offers me 7 different types of apps, and none of them fits the app's profile. The app is supposed to run from a GCE VM and spin up other GCE VMs, so it really has nothing to do with web apps or similar. Am I doing this right, or is there any other way to generate the GCE ID and server keys? Thanks.
P.S. I tried using the keys generated with the "Desktop app" option, but it produces the following error:
ERROR Error creating instance <HttpError 403 when requesting https://compute.googleapis.com/compute/v1/projects/watchful-origin-244417/zones/us-central1-a/instances?alt=json returned "Request had insufficient authentication scopes.">
2020-08-10 18:08:11 deployator0002 elasticluster[3768] ERROR Could not start node compute002: Error creating instance <HttpError 403 when requesting https://compute.googleapis.com/compute/v1/projects/watchful-origin-244417/zones/us-central1-a/instances?alt=json returned "Request had insufficient authentication scopes."> -- <class 'elasticluster.exceptions.InstanceError'>
Firstly, this post has nothing to do with Elasticsearch, as that app is totally unrelated to elasticluster, which is the app of interest (probably no need to change the original tags). The fact is that Google changed the options for OAuth 2.0 and eliminated the 'Other' option from its list of app types. That was the origin of the issue, and the developer is already aware of it. Thanks.
Short description:
I'm using a Laravel application which already has a system for logging in with a Microsoft account. That system works, but this is the first time I'm working on it, and I cannot get users to sign in to the application with their Microsoft account locally. Because the system in the application works and I get an error when logging in, the issue must be in my configuration in the Azure portal.
My configuration is as follows:
I have created a tenant and registered an app in it. My SAML config is as follows:
Entity ID: https://login.microsoftonline.com/tenant-id/saml2
Reply URL (Assertion Consumer Service URL): https://sts.windows.net/tenant-id/
In my .env I have set the following values:
AZURE_AD_CALLBACK_URL=/login/microsoft/callback
AZURE_AD_CLIENT_ID=id-of-the-application-in-tenant
AZURE_AD_CLIENT_SECRET=tenant-secret-key
SAML2_AZURE_SAML_ENABLED=true
SAML2_AZURE_IDP_SSO_URL="https://login.microsoftonline.com/tenant-id/saml2"
SAML2_AZURE_IDP_ENTITYID="https://sts.windows.net/tenant-id/"
SAML2_AZURE_IDP_x509="tenant-id"
SAML2_AZURE_SP_ENTITYID="https://some-app.com/"
I get the following error after entering my credentials:
AADSTS700016: Application with identifier 'https://someapp/' was not found in the directory 'tenant-id'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.
I have added the user that I use to test the login to the application, so this error is totally confusing to me.
I don't know if I have provided all the necessary info, but if something is missing, I will provide it.
I hope someone knows what is wrong with the configuration.
The tenant ID is a GUID. Have you used this, or are you using the literal "tenant-id" string?
Also, the ACS (Assertion Consumer Service URL) is an endpoint in your application, not an Azure URL.
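As a hedged sketch (the hostnames, paths, and placeholder values below are assumptions based on the values quoted in the question, not confirmed settings), the registration and .env would normally point at the Laravel application itself, with Azure acting only as the IdP:

# Azure portal (Basic SAML Configuration of the registered app)
Identifier (Entity ID):  https://some-app.com/                            # must match SAML2_AZURE_SP_ENTITYID
Reply URL (ACS URL):     https://some-app.com/login/microsoft/callback    # your app's callback route, not an Azure URL

# .env (IdP values use the tenant GUID; the x509 value is the IdP signing certificate, not the tenant ID)
SAML2_AZURE_IDP_ENTITYID="https://sts.windows.net/<tenant-guid>/"
SAML2_AZURE_IDP_SSO_URL="https://login.microsoftonline.com/<tenant-guid>/saml2"
SAML2_AZURE_IDP_x509="<base64-encoded IdP signing certificate>"
SAML2_AZURE_SP_ENTITYID="https://some-app.com/"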
There's a lot of new information regarding how to programmatically download Google Play reports using the gsutil tool. Google Play uses a bucket to store these reports, just like Google Cloud Storage does. I'm already able to download reports from the Google Play bucket without a problem. For example:
gsutil cp gs://pubsite_prod_rev_<my project id>/stats/installs/installs_<my app id>_201502_overview.csv .
On the other hand, gsutil offers a feature to watch Google Cloud Storage buckets, so you can receive notifications every time an object in the bucket changes (gsutil notification watchbucket). I am also able to enable notifications in buckets created in my own Google Cloud projects.
The problem is, I'm not able to enable notifications in my Google Play bucket. Is it even possible? I get an AccessDeniedException: 403 Forbidden error when calling:
gsutil notification watchbucket -i playnotif -t sometoken https://notif.mydomain.com gs://pubsite_prod_rev_<my project id>
I've followed all the steps here, being especially careful with those regarding identifying a domain to receive notifications.
As I mentioned above, I'm already able to do all the process I need, but with my own buckets in Google Cloud, not with the Google Play bucket.
The Google Play project has been linked to a Google Cloud project. This happened automatically when I enabled Google Play API access (Google Play Developer Console -> Configuration (left menu) -> API access).
The Google Play project owner and my own Google Cloud project owner are the same.
This owner has successfully registered and validated the domain used to receive the notifications (following the example, I validated both just in case: notif.mydomain.com and mydomain.com, using https in the Google Webmaster Tools).
These domains have also been whitelisted in the Google Developers Console (left sidebar -> APIs & Auth -> Push).
I've successfully enabled notifications in my own Google Cloud buckets using either the project owner account or a service account I created. I've already tried using both (owner and a corresponding service account) in the Google Play bucket, without success.
Any ideas will be greatly appreciated. Thanks!
EDIT:
I had already followed the steps here, but using different procedures (as explained in the comment below). Following Nikita's suggestion, I tried to follow the steps using the same procedure.
So I configured gsutil (through gcloud) to use the owner account:
gcloud config set account owner-of-play-store-project@gmail.com
and while trying to grant full access to the service account, I encountered this error:
$ gsutil acl ch -u my-play-store-service-account@developer.gserviceaccount.com:FC gs://pubsite_prod_rev_my-bucket-id
CommandException: Failed to set acl for gs://pubsite_prod_rev_my-bucket-id/. Please ensure you have OWNER-role access to this resource.
So, I tried to list the default ACL for this bucket, and found:
$ gsutil defacl get gs://pubsite_prod_rev_my-bucket-id
No default object ACL present for gs://pubsite_prod_rev_my-bucket-id. This could occur if the default object ACL is private, in which case objects created in this bucket will be readable only by their creators. It could also mean you do not have OWNER permission on gs://pubsite_prod_rev_my-bucket-id and therefore do not have permission to read the default object ACL.
[]
Conclusion:
It really makes me think that, even when using the project owner account, this account doesn't have the OWNER role on the Play Store bucket. This means ACLs can't be modified, or even listed, and notifications can't be enabled since, sadly, we don't really own the bucket.
At the moment, you cannot. Google Play owns these buckets, and end users do not have the bucket FULL_CONTROL access necessary to subscribe to Object Change Notifications.
Background
I have a Web API registered in Azure AD and secured using WindowsAzureActiveDirectoryBearerAuthentication (OAuth2 bearer token). This is a B2B-type scenario where there are no interactive users - the applications calling the API are daemon-like background apps. As such, I don't need any consent experience - I just want trusted applications to be able to call the API, and other applications - even if they present a valid OAuth token - to be denied.
What I've tried
This sample seemed to describe my scenario almost exactly. However, the way it determines if a caller is a trusted app or not is by comparing the clientID presented via a claim by the caller to a hard-coded value. Obviously you could store the list of trusted clientIDs externally instead of hardcoding, but it seems like I should be able to accomplish this via configuration in the AAD portal so that a) I don't have to maintain a list of clientIDs, and b) I don't have to write my own authorization logic.
It seems like I should be able to define a permission for my API, grant that permission to each calling app in AAD (or a one-time admin consent), and then in my API just check for the presence of that permission in the scp claim.
From looking at the portal, it seems like this is what Application Permissions are intended for.
I can create a permission just fine via the application manifest. Unfortunately, I can't figure out how to specify that it's an Application Permission, not a Delegated Permission! I tried changing the type from User to Admin as described on MSDN, but that seemed to have no effect.
"oauth2Permissions": [
{
...
"type": "Admin",
...
}
]
Question
Am I correct that Application Permissions are the best solution for my scenario? If so, how do I configure it? Or, as I fear, is this yet another feature that is On The Roadmap™ but not currently functional?
Ben, Application Permissions are declared in the appRoles section of the manifest. Indeed, if you declare an appRole called, say, 'trusted' in your resource application's (storage broker demo) manifest, it will show up in the Application Permissions drop-down there. Then, when you assign that Application Permission to the client app, the access token that the client app receives using the client credentials OAuth flow will contain a roles claim with the value 'trusted'. Other apps in the tenant will also be able to get an access token for your resource app, but they won't have the 'trusted' roles claim. See this blog post for details: http://www.dushyantgill.com/blog/2014/12/10/roles-based-access-control-in-cloud-applications-using-azure-ad/
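For reference, a minimal sketch of such an appRole declaration in the resource app's manifest (the id is a placeholder GUID you generate yourself, and the displayName/description are just examples):

"appRoles": [
{
"allowedMemberTypes": [ "Application" ],
"description": "Trusted daemon clients that are allowed to call this API.",
"displayName": "trusted",
"id": "<new-guid>",
"isEnabled": true,
"value": "trusted"
}
]

A client granted this Application Permission then receives "roles": ["trusted"] in its access token, which is the claim your API checks instead of scp.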
Finally, the above way of assigning an application permission to a client app only works when both the resource and the client application are declared in the same directory. If, however, these apps are multi-tenant and a customer will install them separately, a global admin from the customer's directory will need to consent to the client app, which will result in the application permission being assigned to the instance of the client app in the customer's tenant. (My blog post covers this too.)
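One common way to trigger that admin consent (a hedged sketch using the Azure AD v1 authorize endpoint; everything in angle brackets is a placeholder) is to have the customer's global admin visit a URL like:

https://login.microsoftonline.com/<customer-tenant-id>/oauth2/authorize?client_id=<client-app-id>&response_type=code&redirect_uri=<client-reply-url>&prompt=admin_consent

After the admin signs in and consents, the application permission is provisioned in the customer's tenant.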
Hope this helps.
PS: if you're stuck, feel free to ping me on the contact page of http://www.dushyantgill.com/blog
I'm using SonarQube (ver. 4.3.2) and I'm trying to get the list of projects that the API caller is allowed to see. I found the following API, which can get the project list:
http://nemo.sonarsource.org/api/resources
When I call this API, I get all of SonarQube's projects, even though the API caller doesn't have the Browse permission for some of them. By "the API caller" I mean the user who is authorized via HTTP basic authentication. I want to get only the list of projects which the API caller can see.
Is it possible?
Regards,
Michael
When calling the "/api/resources" WS, you will get only the projects you are allowed to see - which indeed means projects for which the user has the "Browse" permission.
If you get all the projects of your SonarQube instance when calling this WS, this means that your permissions allow this and you should review them. For instance, maybe the group "anyone" is set on the "Browse" permission of each project? (which is the default configuration of SonarQube by the way).
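As a quick check (a hedged sketch; the host name and credentials are placeholders, and curl is just one way to call the WS), compare what an authenticated call returns with what an anonymous call returns:

# Authenticated call: returns only the projects this user can browse
curl -u michael:secret "https://sonarqube.example.com/api/resources"

# Anonymous call: if this also returns every project, the "Anyone" group
# probably has the Browse permission on those projects
curl "https://sonarqube.example.com/api/resources"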