MinIO: create and list a bucket with curl

For automation, I want to create some standard buckets in MinIO without having a MinIO client available in the specific environments, as not all machines have the necessary clients installed or maintained.
How would I call "bucket create" with curl? And, to check for success, how would I list the buckets?

If you do not want to use any of MinIO's SDKs or mc, you can take a look at https://czak.pl/2015/09/15/s3-rest-api-with-curl.html
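If your curl is 7.75.0 or newer, you can also let curl do the AWS Signature V4 signing itself via --aws-sigv4, which avoids the hand-rolled signing from that article. A minimal sketch, assuming MinIO at http://localhost:9000 with the default minioadmin:minioadmin credentials and the default us-east-1 region (all three are assumptions, adjust for your deployment):

# create a bucket named "standard-bucket" (HTTP 200 on success)
curl -X PUT "http://localhost:9000/standard-bucket" \
  --user "minioadmin:minioadmin" \
  --aws-sigv4 "aws:amz:us-east-1:s3"

# list all buckets (returns a ListAllMyBucketsResult XML document)
curl "http://localhost:9000/" \
  --user "minioadmin:minioadmin" \
  --aws-sigv4 "aws:amz:us-east-1:s3"

A PUT on the bucket name creates the bucket, and a GET on the service root lists the buckets, which you can then grep for the name you just created.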
The MinIO team is available on their public Slack channel or by email to answer questions 24/7/365.

Related

Why is the cloud.google.com/go/functions Golang package not able to list gen2 functions?

I am trying to call ListFunctions via the Golang SDK using a service account JSON key.
The returned iterator has information about gen1 functions but not gen2.
https://go.dev/play/p/ZwZ-jGscFCl - please add a service account JSON key to run it locally.
The same service account credentials are used with the gcloud binary, and it is able to fetch all the functions.
Is there any issue with the Golang SDK?
I have one answer and one remark.
Answer:
Use the v2 API in Golang, not the v1. The v1 API does not expose the environment (generation) field. Here is the API spec of the v2; look at the environment enum.
For the last two weeks, the v2 API has also supported gen1 functions, so you can get them all at the same time with the same library.
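If you want to sanity-check the v2 behaviour before changing the Go code, you can hit the v2 REST endpoint directly from a shell. A hedged sketch, assuming a project ID of my-project, an authenticated gcloud, and that the "-" wildcard is accepted for listing across all locations:

curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://cloudfunctions.googleapis.com/v2/projects/my-project/locations/-/functions"

Each returned function carries an environment field (GEN_1 or GEN_2), which is the enum the v1 API does not expose; the equivalent client in Go is cloud.google.com/go/functions/apiv2.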
Remark:
You don't need, and shouldn't (mustn't), use a service account key file locally. Your user account (with or without service account impersonation) is made for that. Look at my comments on that question.

How do I check that a Google Cloud service account has a particular permission programmatically?

I'm making an integration with a user-supplied GCS bucket. The user will give me a service account, and I want to verify that service account has object write permissions enabled to the bucket. I am failing to find documentation on a good way to do this. I expected there to be an easy way to check this in the GCS client library, but it doesn't seem as simple as myBucket.CanWrite(). What's the right way to do this? Do I need to have the bucket involved, or is there a way, given a service account json file, to just check that storage.objects.create exists on it?
IAM permissions can be granted at org, folder, project and resource (e.g. GCS Bucket) level. You will need to be careful that you check correctly.
For permissions granted explicitly to the bucket:
Use APIs Explorer to find Cloud Storage service
Use Cloud Storage API reference to find the method
Use BucketAccessControls:get to retrieve a member's (e.g. a Service Account's) permission (if any).
APIs Explorer (sometimes) has code examples but, knowing the method, you can find it in the Go SDK.
The documentation includes a summary for ACLs using the List method, but I think you'll want to use Get (or equivalent).
NOTE I've not done this.
There doesn't appear to be a specific match to the underlying API's Get in the Go library.
From a Client, you can use the Bucket method with a bucket name to get a BucketHandle, and then use the ACL method to retrieve the bucket's ACL (which should include the Service Account's email address and role, if any).
Or you can use the IAM method to get the bucket's Handle from the IAM library and then use the Policy method to get the resource's IAM Policy, which will include the Service Account's email address and IAM role (if any).
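To poke at both of those from a shell before writing the Go, here is a small sketch; my-bucket and sa.json are placeholders, and the last call uses the JSON API's bucket testIamPermissions method (not mentioned above), which reports the permissions the caller actually holds, so run it while authenticated as the service account in question:

# inspect the bucket's ACL and IAM policy and look for the service account
gsutil acl get gs://my-bucket
gsutil iam get gs://my-bucket

# or switch to the service account and ask which permissions it effectively has
gcloud auth activate-service-account --key-file=sa.json
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.googleapis.com/storage/v1/b/my-bucket/iam/testPermissions?permissions=storage.objects.create"

The testIamPermissions route has the advantage that it reflects grants made at project, folder or org level as well, not only those attached to the bucket.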
As DazWilkin's answer shows, the permission can be granted at different levels, and it can be difficult to clearly know whether an account has a permission.
For that, Google Cloud released a service: the IAM Policy Troubleshooter. It's part of the Policy Intelligence suite, which helps you understand, analyse and troubleshoot IAM permissions.
The API to call is described in the documentation.
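From the command line the same check is available as gcloud policy-troubleshoot iam; a hedged sketch, assuming a GCS bucket full resource name and a service account email (check gcloud policy-troubleshoot iam --help for the exact flags):

gcloud policy-troubleshoot iam \
  //storage.googleapis.com/projects/_/buckets/my-bucket \
  --principal-email=my-sa@my-project.iam.gserviceaccount.com \
  --permission=storage.objects.create

The output explains, binding by binding, whether the principal is granted the permission and at which level.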

Using Amazon S3 in place of an SFTP Server

I need to set up a repository where multiple people can go to drop off Excel and CSV files. I need a secure environment with access control, so that customers logging on to drop off their own data can't see another customer's data. If person A logs on to drop a Word document, they can't see person B's Excel sheet. I have an AWS account and would prefer to use S3 for this. I originally planned to set up an SFTP server on an EC2 instance, but after doing some research I feel that using S3 would be more scalable and safer. However, I've never used S3 before, nor have I seen it in a production environment. So my question really comes down to this: does S3 provide a user interface that allows multiple people to drop files off, similar to that of an FTP server? And can I create access control so people can't see other people's data?
Here are the developer resources for S3
https://aws.amazon.com/developertools/Amazon-S3
Here are some pre-built widgets
http://codecanyon.net/search?utf8=%E2%9C%93&term=s3+bucket
Let us know your angle, as we can provide other ideas once we know more about your requirements.
Yes, it does. You can control access to your resources using IAM users and roles.
http://aws.amazon.com/iam/
You can grant privileges to parts of an S3 bucket depending on the user or role, for example:
mybucket/user1
mybucket/user2
mybucket/development
could all have different permissions.
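The usual way to enforce that split is an IAM policy that scopes each user to a prefix named after them; the ${aws:username} policy variable expands to the calling IAM user's name at evaluation time. A sketch only, assuming the bucket is called mybucket and the user names match the prefixes (attach it to each user, or to a shared group):

aws iam put-user-policy --user-name person-a --policy-name OwnFolderOnly \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {"Sid": "ListOwnFolderOnly",
       "Effect": "Allow",
       "Action": ["s3:ListBucket"],
       "Resource": ["arn:aws:s3:::mybucket"],
       "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}},
      {"Sid": "ReadWriteOwnFolderOnly",
       "Effect": "Allow",
       "Action": ["s3:PutObject", "s3:GetObject"],
       "Resource": ["arn:aws:s3:::mybucket/${aws:username}/*"]}
    ]
  }'

With this in place, person A can list and write only under mybucket/person-a/ and never sees person B's objects.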
Hope this helps.

What are the possible capabilities of IAM in AWS?

One of my clients wants to understand the IAM feature set before migrating a business application to the Amazon cloud.
I have figured out two use cases which we can recommend to our client; these are:
Resource-Level Permissions for EC2
• Allow users to act on a limited set of resources within a larger, multi-user EC2 environment.
• Control which users can terminate which instances.
• Restricting a user's access to a single EC2 instance (currently not supported by Amazon's APIs)
IAM Roles for Amazon EC2 resources
Command Line Usage
• Unix/Linux/Windows - Use the AWS Command Line Interface, which is a unified tool to manage the AWS services. We can access the Command Line Interface using the EC2 instance launched with IAM role support without specifying the credentials explicitly.
Programmatic Usage
• Use the appropriate AWS SDK for your language of choice. Configure it without specifying the credentials.
I would like to know other capabilities of IAM which we can recommend to our client and other use cases which you can recommend to us. Please let us know if any further explanation is required.
Any prompt response will be highly appreciated.
Thanks in advance
This is a very useful feature of AWS!
User Management: if you are a large team, you will have to give different users (developers, testers, deployment) different types of permissions, with access levels like S3 read-only, DynamoDB full-access, etc.
Manage Users : http://aws.amazon.com/iam/details/manage-users/
Not keeping credentials in code: if you use IAM roles, you can specify that an EC2 instance should run with a given role. This helps you achieve things like a cluster that has access only to S3, not to the DB.
IAM Roles for Amazon EC2 - Amazon Elastic Compute Cloud : http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Handling release staging: this is a benefit of roles. You move apps through dev, QA, staging and prod, and I usually keep different accounts for this. If you configure the EC2 instances to run with roles, the stage difference can be handled without code changes. Just move the build from one account to another, and it works with no risk!
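To make the role points concrete: on an EC2 instance launched with an instance profile, the CLI and SDKs fetch temporary role credentials from the instance metadata automatically, so nothing is configured on disk and the same build works in every account. A small sketch (the role and bucket names are made up):

# no aws configure, no keys on the box: the instance profile supplies credentials
aws sts get-caller-identity                      # shows the assumed role
aws s3 cp build.tar.gz s3://my-release-bucket/builds/

# in the QA or prod account the instance runs with that account's role,
# so the very same commands work there without any code or config change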
There are lots of other benefits:
Product Details : http://aws.amazon.com/iam/details/

How to use Service Accounts with gsutil, for uploading to Cloud Storage + BigQuery

How do I upload data to Google BigQuery with gsutil, by using a Service Account I created in the Google APIs Console?
First I'm trying to upload data to Cloud Storage using gsutil, as that seems to be the recommended model. Everything works fine with gmail user approval, but it does not allow me to use a service account.
It seems I can use the Python API to get an access token using signed JWT credentials, but I would prefer using a command-line tool like gsutil with support for resumable uploads etc.
EDIT: I would like to use gsutil in a cron to upload files to Cloud Storage every night and then import them to BigQuery.
Any help or directions to go would be appreciated.
To extend @Mike's answer, you'll need to:
Download the service account key file and put it in e.g. /etc/backup-account.json
gcloud auth activate-service-account --key-file /etc/backup-account.json
And now all calls use said service account.
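Put together for the nightly cron mentioned in the question, the whole pipeline can stay in one short script. A sketch, assuming a bucket gs://nightly-exports, a dataset/table mydataset.daily_table, and an export path under /var/exports (all placeholders), and that gsutil and bq come from the same Cloud SDK install so they share the credentials gcloud activated:

#!/bin/sh
# authenticate as the service account once per run
gcloud auth activate-service-account --key-file /etc/backup-account.json

# upload tonight's export to Cloud Storage (gsutil handles resumable uploads)
gsutil cp /var/exports/data-$(date +%F).csv gs://nightly-exports/

# load it into BigQuery straight from the GCS URI
bq load --autodetect --source_format=CSV \
  mydataset.daily_table gs://nightly-exports/data-$(date +%F).csv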
Google Cloud Storage just released a new version (3.26) of gsutil that supports service accounts (as well as a number of other features and bug fixes). If you already have gsutil installed you can get this version by running:
gsutil update
In brief, you can configure a service account by running:
gsutil config -e
See gsutil help config for more details about using the config command.
See gsutil help creds for information about the different flavors of credentials (and different use cases) that gsutil supports.
Mike Schwartz, Google Cloud Storage Team
Service accounts are generally used to identify applications but when using gsutil you're an interactive user and it's more natural to use your personal account. You can always associate your Google Cloud Storage resources with both your personal account and/or a service account (via access control lists or the developer console Team tab) so my advice would be to use your personal account with gsutil and then use a service account for your application.
First of all, you should be using the bq command line tool to interact with BigQuery from the command line. (Read about it here and download it here).
While I agree with Marc that it's a good idea to use your personal credentials with both gsutil and bq, the bq command-line tool does support the use of service accounts. The command to use service account auth might look something like this:
bq --service_account 1234567890@developer.gserviceaccount.com --service_account_credential_store keep_me_safe --service_account_private_key_file myfile.key query 'select count(*) from publicdata:samples.shakespeare'
Type bq --help for more info.
It's also pretty easy to use service accounts in your code via Python or Java. Here's a quick example using some code from the BigQuery Authorization guide.
import httplib2
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

# REPLACE WITH YOUR Project ID
PROJECT_NUMBER = 'XXXXXXXXXXX'
# REPLACE WITH THE SERVICE ACCOUNT EMAIL FROM GOOGLE DEV CONSOLE
SERVICE_ACCOUNT_EMAIL = 'XXXXX@developer.gserviceaccount.com'

# Read the service account's PKCS#12 private key
f = open('key.p12', 'rb')
key = f.read()
f.close()

# Build signed-JWT credentials scoped to BigQuery and authorize an HTTP client
credentials = SignedJwtAssertionCredentials(
    SERVICE_ACCOUNT_EMAIL,
    key,
    scope='https://www.googleapis.com/auth/bigquery')
http = httplib2.Http()
http = credentials.authorize(http)

# List the project's datasets using the authorized HTTP client
service = build('bigquery', 'v2')
datasets = service.datasets()
response = datasets.list(projectId=PROJECT_NUMBER).execute(http)

print('Dataset list:\n')
for dataset in response['datasets']:
    print("%s\n" % dataset['id'])
Posting as an answer, instead of a comment, based on Jonathan's request
Yes, an OAuth grant made by an individual user will no longer be valid if the user no longer exists. So, if you use the user-based flow with your personal account, your automated processes will fail if you leave the company.
We should support service accounts with gsutil, but don't yet.
You could do one of:
Probably add the feature quickly to gsutil/oauth2_plugin/oauth2_helper.py using the existing Python OAuth client implementation of service accounts
Retrieve the access token externally via the service account flow and store it in the cache location specified in ~/.boto (slightly hacky)
Create a role account yourself (via gmail.com or google apps) and grant permission to that account and use it for the OAuth flow.
We've filed the feature request to support service accounts for gsutil, and have some initial positive feedback from the team (though I can't give an ETA).
As of today you don't need to run any command to set up a service account to be used with gsutil. All you have to do is create ~/.boto with the following content:
[Credentials]
gs_service_key_file=/path/to/your/service-account.json
Edit: you can also tell gsutil where it should look for the .boto file by setting BOTO_CONFIG (docs).
For example, I use one service account per project with the following config, where /app is the path to my app directory:
.env:
BOTO_CONFIG=/app/.boto
.boto:
[Credentials]
gs_service_key_file=/app/service-account.json
script.sh:
export $(xargs < .env)
gsutil ...
In the script above, export $(xargs < .env) serves to load the .env file (source). It tells gsutil the location of the .boto file, which in turn tells it the location of the service account. When using the Google Cloud Python library you can do all of this with GOOGLE_APPLICATION_CREDENTIALS, but that’s not supported by gsutil.
