Is there a way to get a list of projects and API calls associated with a user's Google Workspace account? - google-api

I'm performing a large clean up of Google Workspace accounts and I'd like to programmatically determine whether any of the accounts have projects associated with them, and if so, when the last API calls associated with that project were made. Is there any way to do this programmatically via the Google Admin (or some other) APIs? Thank you

Yes... probably ;-)
This is a naive solution and I will be interested to see better ways to do this.
Please run this on a subset of your Projects and Users to ensure it addresses your need
For you to consider:
You write "Projects" but identities can be bound to many Google Cloud resources (Organizations, Folders, Buckets etc.) too
How many Projects and Users are there?
serviceAccount: should be excluded but what about other identities?
We'll filter log entries to the user: identities currently present in each Project's IAM policy.
Org Admin
You'll need to use an Org Admin identity.
List all Projects
PROJECTS=$(\
gcloud projects list --format="value(projectId)")
for PROJECT in ${PROJECTS}
do
echo "Project: ${PROJECT}"
...
done
Get each Project's Policy's user:
Filter the policy by members of the form user:{email}
Extract the value {email} from user:{email}
USERS=$(\
gcloud projects get-iam-policy ${PROJECT} \
--flatten="bindings[].members[]" \
--filter="bindings.members:user" \
--format="value(bindings.members.split(\":\").slice(1:))")
echo "Users: ${USERS}"
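For reference, the split(":").slice(1:) transform in the --format flag strips the user: prefix; the same extraction can be sketched in plain bash with parameter expansion (the member values below are made-up examples of what an IAM policy binding returns):

```shell
#!/usr/bin/env bash
# Hypothetical IAM members as they appear in a policy binding.
MEMBERS="user:alice@example.com serviceAccount:sa@project.iam.gserviceaccount.com user:bob@example.com"

USERS=""
for MEMBER in ${MEMBERS}; do
  # Keep only user: members, mirroring --filter="bindings.members:user"
  if [[ "${MEMBER}" == user:* ]]; then
    # Strip the leading "user:", mirroring split(":").slice(1:)
    USERS="${USERS} ${MEMBER#user:}"
  fi
done

echo "Users:${USERS}"
```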
Filter the Audit Logs (specifically the Admin Activity logs)
Grep the activity logs for the last 30 days for the most recent (!) log entry for this user.
for USER in ${USERS}
do
echo "User: ${USER}"
FILTER="
logName=\"projects/${PROJECT}/logs/cloudaudit.googleapis.com%2Factivity\"
protoPayload.authenticationInfo.principalEmail=\"${USER}\"
"
LOG=$(gcloud logging read "${FILTER}" \
--project=${PROJECT} \
--freshness="30d" \
--order=desc \
--limit=1)
printf "Log:\n%s" "${LOG}"
done

Related

Edit my script to get IPs based on IAM label

Is there a gcloud command to get all the GCP project labels, and can I --filter them by organization?
There is no single command that lists labels.
You can list projects in selected organization, then list labels in each of these projects:
#!/bin/bash
for PROJECT in $(gcloud projects list --filter="parent.id=ORGANIZATION_ID" --format="value(projectId)")
do
gcloud projects describe "${PROJECT}" --format="value(labels)"
done
You can find more information in the gcloud documentation on listing and filtering.

SonarQube API: Retrieving a list of users assigned to a project permission?

I'm trying to find a list of users for a specific project (by projectKey) who possess the issueadmin permission. I've found a documented API that gets me pretty close:
api/permissions/search_project_permissions
but the response that I get back only has summary information: counts of groups/users for each permission type.
Does anybody know if there's a way to get to the login details for the users?
There is an "internal" web service (meaning it could change without notice!) that does this. You'll use it like so:
http://myserver.myco.com/api/permissions/users?projectId=[project guid]&permission=issueadmin
In Web API interface use the "Show Internal API" checkbox at the top of the left column to see it.
Just noticed: in SonarQube v6.7 it works as follows:
https://sonarqube.dhl.com/api/permissions/users?projectKey=<KEY>
https://sonarqube.dhl.com/api/permissions/users?projectKey=<KEY>&permission=issueadmin
https://sonarqube.dhl.com/api/permissions/users?projectKey=<KEY>&permission=issueadmin&permission=scan
All possible permissions are (corresponding to Browse, See Source Code, Administer Issues, Administer, and Execute Analysis in the UI):
admin
codeviewer
issueadmin
scan
user
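The query variants above can be assembled with a small helper; this is a sketch only (the server name and project key are placeholders, and the endpoint itself is internal, so it may change without notice):

```shell
#!/usr/bin/env bash
# Build the query URL for SonarQube's internal api/permissions/users endpoint.
# $1 = base URL, $2 = projectKey, remaining args = permissions to filter by.
permissions_url() {
  local base="$1" key="$2"; shift 2
  local url="${base}/api/permissions/users?projectKey=${key}"
  local perm
  for perm in "$@"; do
    url="${url}&permission=${perm}"
  done
  echo "${url}"
}

# Example with a hypothetical server and key:
permissions_url "https://sonarqube.example.com" "my-project" issueadmin scan
# -> https://sonarqube.example.com/api/permissions/users?projectKey=my-project&permission=issueadmin&permission=scan
```

You can then fetch the result with curl, authenticating with a user token as the Basic-auth username, e.g. curl -u "${TOKEN}:" "$(permissions_url ...)".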

Amazon S3 and Cloudfront cache, how to clear cache or synchronize their cache

I have a cron job that runs every 10 minutes and updates the content-type and x-amz-meta headers. But since yesterday, it seems that after the cron job runs, Amazon is not picking up the changes (not refreshing its cache).
I even made the changes manually, but no change...
When a video is uploaded it has an application/x-mp4 content type, and the cron job changes it to video/mp4.
Although S3 has the right content type, video/mp4, CloudFront still serves application/x-mp4 (the old content type)...
The cron job had been working for the last 6 months without a problem.
What is wrong with Amazon's caching? How can I synchronize the caches?
Use Invalidations to clear the cache. You can specify the paths of the files you want to clear, or simply use wildcards to clear everything.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html#invalidating-objects-api
This can also be done using the API!
http://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateInvalidation.html
The AWS PHP SDK now has the methods but if you want to use something lighter check out this library:
http://www.subchild.com/2010/09/17/amazon-cloudfront-php-invalidator/
user3305600's solution doesn't work as setting it to zero is the equivalent of Using the Origin Cache Headers.
As to the actual code
get your CloudFront distribution id
aws cloudfront list-distributions
Invalidate all files in the distribution, so CloudFront fetches fresh ones (note that the path must be /* to match everything; / alone only matches the root object)
aws cloudfront create-invalidation --distribution-id=S11A16G5KZMEQD --paths "/*"
My actual full release script is
#!/usr/bin/env bash
BUCKET=mysite.com
SOURCE_DIR=dist/
export AWS_ACCESS_KEY_ID=xxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxx
export AWS_DEFAULT_REGION=eu-west-1
echo "Building production"
if npm run build:prod ; then
echo "Build Successful"
else
echo "exiting.."
exit 1
fi
echo "Removing all files on bucket"
aws s3 rm s3://${BUCKET} --recursive
echo "Attempting to upload site .."
echo "Command: aws s3 sync $SOURCE_DIR s3://$BUCKET/"
aws s3 sync ${SOURCE_DIR} s3://${BUCKET}/
echo "S3 Upload complete"
echo "Invalidating CloudFront distribution to get a fresh cache"
aws cloudfront create-invalidation --distribution-id=S11A16G5KZMEQD --paths "/*" --profile=myawsprofile
echo "Deployment complete"
References
http://docs.aws.amazon.com/cli/latest/reference/cloudfront/get-invalidation.html
http://docs.aws.amazon.com/cli/latest/reference/cloudfront/create-invalidation.html
Here is a manual way to invalidate the cache for all files on CloudFront via the AWS console:
Open your CloudFront Distributions list, and click the ID of the distribution whose cache you want to clear.
Click the 'Invalidations' tab.
Click the 'Create invalidation' button.
Enter /* in the object paths input in order to clear the cache of all files.
Click the 'Create invalidation' button.
S3 is not meant for real-time development, but if you really want to test your freshly deployed website, use:
http://yourdomain.com/index.html?v=2
http://yourdomain.com/init.js?v=2
Adding a version parameter at the end stops the browser from using the cached version of the file; it will fetch a fresh copy from the server bucket.
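A hedged sketch of this cache-busting idea: instead of bumping ?v=2 by hand, derive the version parameter from the file's content hash, so the URL changes exactly when the file does (the domain and file name are hypothetical; md5sum is the GNU coreutils tool, macOS ships md5 instead):

```shell
#!/usr/bin/env bash
# Append a content-derived version parameter to an asset URL.
# When the file changes, its hash (and therefore the URL) changes,
# so browsers and the CDN fetch a fresh copy.
versioned_url() {
  local domain="$1" file="$2"
  local hash
  hash=$(md5sum "${file}" | cut -c1-8)   # first 8 hex chars of the digest
  echo "${domain}/${file}?v=${hash}"
}

# Example with a hypothetical file:
printf 'console.log("hi");\n' > init.js
versioned_url "http://yourdomain.com" "init.js"
rm -f init.js
```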
Cloudfront will cache a file/object until the cache expiry. By default it is 24 hrs. If you have changed this to a large value, then it takes longer.
If you ever need to force-clear the cache, use an invalidation. It is charged separately.
Another option is to change the URL (object key), so it fetches the new object always.
If you're looking for a minimal solution that invalidates the cache, this edited version of Dr Manhattan's solution should be sufficient. Note that I'm specifying the /* path to indicate I want the whole site refreshed.
export AWS_ACCESS_KEY_ID=<Key>
export AWS_SECRET_ACCESS_KEY=<Secret>
export AWS_DEFAULT_REGION=eu-west-1
echo "Invalidating CloudFront distribution to get a fresh cache"
aws cloudfront create-invalidation --distribution-id=<distributionId> --paths "/*" --profile=<awsprofile>
Region codes can be found in the AWS regions documentation.
You'll also need to create a profile using the aws cli.
Use the aws configure --profile option. Below is an example snippet from Amazon.
$ aws configure --profile user2
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: text
(Edit: this does not work.) As of 2014, you can clear your cache whenever you want. Please go through the documentation, or
just go to your distribution settings > Behaviors > Edit
Object Caching Use (Origin Cache Headers)
Customize
Minimum TTL = 0
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
I believe using * invalidates the entire cache in the distribution. I'm trying it at the moment and will update this answer.
Update:
It worked as expected. Please note that you can invalidate the object you would like by specifying the object path.

How to hide TeamCity configuration for selected users?

I have one TeamCity project Dac.Test that contains 3 configurations: DEV, QA, PROD.
Also I have some users associated with their roles. Is it possible to hide / show certain configurations for selected users or groups?
For example: Users associated with group: Testers can see QA configuration, but not PROD and DEV.
There is no way of managing user permissions per build; this is available on a project level only. You could create a sub-project in the Dac.Test project to cater for this.
If you're looking for a way of stopping people from mistakenly running this build, the following approach will work.
This method uses a prompt box that will pop up after you click the run button, it also needs input from the user confirming that they mean to run the build.
No one can run this build by accident
Go to your build configuration in the TeamCity UI
From here, go to Edit Configuration Settings --> Parameters --> Add new parameter
Enter something like 'Confirmation' as the parameter name
Then beside 'Spec:', click the 'Edit...' button
Set up the parameter to display as a prompt and validate the entered value with a regular expression.
You will now be prompted and asked for confirmation when you click the run button. The user will have to enter 'YES' in the prompt box that appears; any other value will stop the user from building.
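The parameter spec configured in the 'Edit...' dialog looks roughly like this. This is a sketch only: the exact attribute syntax may vary between TeamCity versions, and the message texts are illustrative:

```
text description='Type YES to confirm you want to run this build' display='prompt' validationMode='regex' regexp='YES' validationMessage='You must type YES to run this build'
```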
This is best accomplished by using TeamCity's built-in role management. Roles allow you to set fine-grained permissions for users and groups. One potential issue, however, is that roles are scoped to projects (not build configurations). You'll need to create a separate Dac.Test QA project+configuration and provide your Testers the necessary privileges there. You'll also need to make sure that they are stripped of all privileges for the Dac.Test project.

tf.exe get for different user

I use the following command to get the latest version of a branch for a specific user (not the one running the process):
tf get $/MyProject/Development /version:WmyPC;otherUser /login:otherUser,otherPassword
But I keep getting:
The operation cannot be completed because the user (otherUser) does
not have one or more required permissions (Use) for workspace...
Any ideas?
By default, when you create a workspace it is a 'Private Workspace' - this means that the person who created it is the only person who can "use" it (which is why you get that specific error message).
What you will want to do is change the workspace to a 'Public Workspace' - this updates the permissions and allows multiple people to use the same workspace, but using their own credentials.
For more information, see my blog post TFS2010: Public Workspaces.
You are trying to get the files on your local machine with the credentials of someone else; it is not executing tf under the other credentials.
In other words, you still use your own workspace mapping.
You need to use the RUNAS command to fulfil your task: http://social.msdn.microsoft.com/Forums/en-US/tfsversioncontrol/thread/20b6f678-4657-4b14-a114-5eeb232934e2/
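The RUNAS approach can be sketched as follows; it starts a new tf process under the other user's Windows account so the workspace permission check passes (runas will prompt for that user's password, since it cannot be passed on the command line; the domain, account, and path are placeholders):

```
runas /user:DOMAIN\otherUser "tf get $/MyProject/Development"
```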
