MinIO bucket private but objects public

We are running a MinIO server on macOS. We generate a presigned PUT URL using the Node npm package and upload from the browser with a simple fetch call.
We want to keep the bucket private, but the objects (files) inside it should be publicly accessible. How can we achieve this?
Right now we have made the bucket public, but that also lets anyone list all the objects in the bucket, which is not what we want.
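Simplified, with placeholder endpoint, credentials, bucket and object names, our flow looks roughly like this:
// server side (Node): generate a presigned PUT URL with the MinIO client
const Minio = require('minio')

const minioClient = new Minio.Client({
  endPoint: 'localhost',   // placeholder
  port: 9000,
  useSSL: false,
  accessKey: 'ACCESS_KEY', // placeholder
  secretKey: 'SECRET_KEY'  // placeholder
})

// URL valid for 10 minutes; it is then handed to the browser
minioClient.presignedPutObject('mybucket', 'uploads/photo.png', 600, (err, url) => {
  if (err) throw err
  // send `url` to the front end
})

// browser side: upload the selected file with a plain fetch call
// fetch(url, { method: 'PUT', body: file })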

In S3, access to objects is governed by the bucket policy unless permissions are set explicitly per object, so after uploading an object to the bucket you can grant access using a bucket policy or ACLs. The example below makes a couple of specific files public inside a private bucket named examplebucket. Maintaining a list of public objects in a private bucket can be hard and inefficient, though; you can always reverse the approach and write a policy that makes selected files private instead.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [ "s3:GetObject" ],
      "Resource": [
        "arn:aws:s3:::examplebucket/design_info.doc",
        "arn:aws:s3:::examplebucket/design_info_2.doc"
      ]
    }
  ]
}
https://docs.min.io/docs/javascript-client-api-reference.html#setBucketPolicy
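If maintaining an explicit list of public objects becomes unwieldy, another option is to make only a prefix publicly readable, so objects uploaded under that prefix are public while the rest of the bucket, including listing, stays private. A rough sketch with the MinIO JS client (bucket name, prefix and credentials are placeholders):
const Minio = require('minio')

const minioClient = new Minio.Client({
  endPoint: 'localhost',   // placeholder
  port: 9000,
  useSSL: false,
  accessKey: 'ACCESS_KEY', // placeholder
  secretKey: 'SECRET_KEY'  // placeholder
})

// anonymous read access, but only for objects under the public/ prefix
const policy = JSON.stringify({
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Principal: { AWS: ['*'] },
      Action: ['s3:GetObject'],
      Resource: ['arn:aws:s3:::mybucket/public/*']
    }
  ]
})

minioClient.setBucketPolicy('mybucket', policy, err => {
  if (err) throw err
  console.log('Bucket policy set')
})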

Related

Is it Possible to obtain User from Painless script when updating doc from Kibana?

Using Elastic Painless scripting, is it possible to get the user submitting a document update via the Kibana GUI?
Using ingest pipelines, I've tried to append the Security User to the context
{
  "set_security_user": {
    "field": "_security",
    "properties": [
      "roles",
      "username",
      "email",
      "full_name",
      "metadata",
      "api_key",
      "realm",
      "authentication_type"
    ]
  }
}
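For context, the processor is attached roughly like this (the pipeline name track-editor and the index name my-index are placeholders; the pipeline is set as the index's default pipeline):
PUT _ingest/pipeline/track-editor
{
  "processors": [
    {
      "set_security_user": {
        "field": "_security",
        "properties": ["username", "roles", "realm", "authentication_type"]
      }
    }
  ]
}

PUT my-index/_settings
{
  "index.default_pipeline": "track-editor"
}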
However, regardless of which user submits a change to the document (via the Kibana GUI), it always sets it to:
...
"roles": [
  "kibana_system",
  "cloud-internal-enterprise_search-server"
],
"realm": {
  "name": "found",
  "type": "file"
},
"authentication_type": "REALM",
"username": "cloud-internal-enterprise_search-server"
...
Context:
What I'm trying to achieve is an additional layer of restrictions when users modify the Enterprise Search indexes. I want Developer roles to be able to see all the configuration items within App Search (Enterprise Search via Kibana), but only to read and not write. There doesn't seem to be a way to do this using the standard Enterprise Search roles, which give Admins, Owners and Devs full read/write permissions for the engine.

How to execute a Lambda function which copies objects from one S3 bucket to another via Step Functions?

I was able to copy data from the source bucket to a destination bucket using a Lambda function; however, I got an error when executing the Lambda function from Step Functions. Below are the steps I followed from scratch.
The region chosen is ap-south-1.
Created 2 buckets. Source bucket: start.bucket & Destination bucket: final.bucket
Created a Lambda function with the following information:
Author from scratch
Function name: CopyCopy
Runtime: Python 3.8
Created an IAM role for Lambda, LambdaCopy, gave it the necessary policies (S3 full access and Step Functions full access) and attached it to the function.
Added a trigger and chose:
S3
Bucket: start.bucket
Event type: All object create events
I found Python code on GeeksforGeeks and used it in the code section.
import json
import boto3

s3_client = boto3.client('s3')

# lambda function to copy a file from one S3 bucket to another
def lambda_handler(event, context):
    # source bucket taken from the S3 event record
    source_bucket_name = event['Records'][0]['s3']['bucket']['name']
    # key of the object that has just been uploaded
    file_name = event['Records'][0]['s3']['object']['key']
    # destination bucket
    destination_bucket_name = 'final.bucket'
    # where the file needs to be copied from
    copy_object = {'Bucket': source_bucket_name, 'Key': file_name}
    # perform the copy
    s3_client.copy_object(CopySource=copy_object, Bucket=destination_bucket_name, Key=file_name)
    return {
        'statusCode': 200,
        'body': json.dumps('File has been Successfully Copied')
    }
I deployed the code and it worked: a CSV file uploaded to start.bucket was copied to final.bucket.
Then, I created a State machine in Step functions with the following information:
Design your workflow visually
Type: Standard
Dragged the AWS Lambda between the Start and End state.
Changed its name to LambdaCopy
Integration type: Optimized
Under API Parameters, Function name (I chose the Lambda function that I had created): CopyCopy:$LATEST
Next State: End
Next and then again Next
State machine name: StepLambdaCopy
IAM Role: Create a new role (later I also gave it S3 full access, Lambda full access and Step Functions full access).
It showed an error when I tried to execute it.
I know I am missing something; I would really appreciate the help.
Step Functions now allows you to call the S3 CopyObject API through the AWS SDK service integration directly, completely bypassing the need for Lambda and boto3. Take a look here for more information.
So in your case you would need a simple task that looks like this:
{
  "Comment": "A description of my state machine",
  "StartAt": "CopyObject",
  "States": {
    "CopyObject": {
      "Type": "Task",
      "End": true,
      "Parameters": {
        "ServerSideEncryption": "AES256",
        "Bucket.$": "$.destination_bucket",
        "CopySource.$": "$.source_path",
        "Key.$": "$.key"
      },
      "Resource": "arn:aws:states:::aws-sdk:s3:copyObject"
    }
  }
}
Then your input will need to feed in the parameters you would normally use to copy a file with the copy command: source path, destination bucket and object key, exactly the same as with the boto3 command.
Note: Your state machine IAM role will need direct S3 permissions and will need to be in the same region as the buckets.
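With the bucket names from your question, the execution input would then look something like this (the object key is just an example; the leading slash in source_path matches the CopySource format):
{
  "source_path": "/start.bucket/data.csv",
  "destination_bucket": "final.bucket",
  "key": "data.csv"
}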
It's always confusing what exactly you have to pass as parameters. Here is a template I use to copy the output of an Athena query. You can adapt it to your needs:
"athena_score": {
"Type": "Task",
"Resource": "arn:aws:states:::athena:startQueryExecution.sync",
"Parameters": {
"QueryExecutionContext": {
"Catalog": "${AthenaCatalog}",
"Database": "${AthenaDatabase}"
},
"QueryString": "SELECT ...",
"WorkGroup": "${AthenaWorkGroup}",
"ResultConfiguration": {
"OutputLocation": "s3://${BucketName}/${OutputPath}"
}
},
"TimeoutSeconds": 300,
"ResultPath": "$.responseBody",
"Next": "copy_csv"
},
"copy_csv": {
"Type": "Task",
"Resource": "arn:aws:states:::aws-sdk:s3:copyObject",
"Parameters": {
"Bucket": "${BucketName}",
"CopySource.$": "States.Format('/${BucketName}/${OutputPath}/{}.csv', $.responseBody.QueryExecution.QueryExecutionId)",
"Key": "${OutputPath}/latest.csv"
},
"ResultPath": "$.responseBody.CopyObject",
"Ent": "true"
}

How should I reference grandparent properties in a GraphQL query, where I don't define the intermediate resolver?

I am building a GraphQL schema that has a type Pod which contains 3 nested objects.
type Pod {
  metadata: Metadata
  spec: Spec
  status: Status
}
My data source is an external API which returns an array of this data. In fact, I defined my schema around this API response. I have included a trimmed down version below.
[
  {
    "metadata": {
      "name": "my-app-65",
      "namespace": "default"
    },
    "spec": {
      "containers": [
        {
          "name": "hello-world",
          "image": "container-image.io/unique-id"
        }
      ]
    }
  }
  // more like-objects in the array
]
However, for each of these container objects inside the array, I would like to add some extra information which this initial API call does not provide. I can query this information separately if I provide the name & namespace properties from the parent's metadata:
/container/endpoint/${namespace}/${name}
Returns...
[
  {
    name: "hello-world",
    nestObj: {
      // data
    }
  }
]
I would like to add this nested object to the original response when that data is queried.
However, I don't have a clean way to access pod.metadata.name inside the resolver for Container.
Currently my resolvers look like this:
Query: {
  pods: async () => {
    // query that returns an array of pod objects
    return pods
  }
},
Container: {
  name: "hello-world",
  nestedObj: async (parent, args, context, info) => {
    // query that hits the second endpoint but requires name & namespace;
    // however, I don't have access to those values here
  }
}
Perfect-world solution: I could access parent.parent.metadata.name inside the Container resolver.
Current approach (brute force): repetitively add the properties to the children.
Loop through every nested container object in every Pod and add the podName & namespace as properties there.
pods.forEach(pod => pod.spec.containers.forEach(container => {
  container.podName = pod.metadata.name;
  container.namespace = pod.metadata.namespace;
}))
This feels very much like a hack, and it really bogs down my query times, especially considering this data won't always be requested.
I have two intuitive ideas but don't know how to implement them (I'm a bit new to GraphQL).
Implement a Pod resolver that would then pass this data in through the rootValue as described here: https://github.com/graphql/graphql-js/issues/1098
Access it somewhere inside the info object
The problem with the first one is that my data source only sends me the data as an array, not individual pods, and I'm unsure how to pass that array of data into the resolvers for individual components.
The problem with the second is that the info object is very dense. I tried accessing the data via path, but path seems to only store the types, not the actual data.
It's also possible I'm just implementing this completely wrong; I welcome such feedback.
Thanks for any guidance, suggestions, or resources.

Laravel / AWS S3 - How to validate a file being uploaded using a Pre-Signed URL

How does one validate files uploaded straight to S3 using a Pre-Signed URL?
Normally when uploading through Laravel, you can just validate the request using rules, such as:
[
'image' => 'required|file|mimes:jpeg,png,pdf|max:2500',
]
However, when using a Pre-Signed URL, the file goes straight to S3 storage and Laravel only receives a string with the path or URL of the file/image.
Is there a way to set rules when creating the S3 Pre-Signed URL so that it only accepts certain files? Or can I get the Laravel app to retrieve the file from S3 storage afterwards and validate it using the same validation rules?
In terms of your web app, there are no additional parameters to limit file type and such (as of now). You could implement additional client-side validation (which could help but most likely won't solve the problem).
However, you could use S3 policy statements to limit file types and other things:
Allow the s3:PutObject action only for objects that have the extension of the file type that you want.
Explicitly deny the s3:PutObject action for objects that don't have the extension of the file type that you want.
Note: You need this explicit deny statement to apply the file-type requirement to users with full access to your Amazon S3 resources. The following example bucket policy allows the s3:PutObject action only for objects with .jpg, .png, or .gif file extensions.
Important: For the first Principal value, list the Amazon Resource Names (ARNs) of the users that you want to grant upload permissions to. For the Resource and NotResource values, be sure to replace bucket-name with the name of your bucket.
{
  "Version": "2012-10-17",
  "Id": "Policy1464968545158",
  "Statement": [
    {
      "Sid": "Stmt1464968483619",
      "Effect": "Allow",
      "Principal": {
        "AWS": "IAM-USER-ARN"
      },
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::bucket-name/*.jpg",
        "arn:aws:s3:::bucket-name/*.png",
        "arn:aws:s3:::bucket-name/*.gif"
      ]
    },
    {
      "Sid": "Stmt1464968483619",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "NotResource": [
        "arn:aws:s3:::bucket-name/*.jpg",
        "arn:aws:s3:::bucket-name/*.png",
        "arn:aws:s3:::bucket-name/*.gif"
      ]
    }
  ]
}
Another thing is to use CORS to specify the allowed origin.
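For example, the bucket's CORS rules (in the JSON format the S3 console accepts) could allow browser PUTs only from your app's origin; the origin below is a placeholder:
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": []
  }
]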

Shopify StoreFront API GraphQL query returns nothing

I am trying to build a client's Shopify store with Gatsby. For that I use the gatsby-source-shopify2 plugin, and I always got error messages like this:
{
  "errors": [
    {
      "message": "Cannot query field \"allShopifyProduct\" on type \"Query\".",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ]
    }
  ]
}
So I investigated a bit to find out what was happening: I went to the Shopify Help Center, followed this quick tutorial and reproduced it with my client's store and with my own freshly created free test store.
Here are the steps I followed:
Create a new store, called 'my-store'
Create a new product
Create a new private app
Check the box Allow this app to access your storefront data using the Storefront API
Copy the API key
Double-check that the private app is checked in the Product Availability, just to be sure
Open GraphiQL, and set the GraphQL Endpoint to be https://my-store.myshopify.com/api/graphql
Set the only HTTP Header to be: X-Shopify-Storefront-Access-Token: <API key>
Then I typed this in the query field:
{
  shop {
    name
  }
}
Surprisingly, no error occurred, but the expected output didn't come either. It should have been:
{
  "data": {
    "shop": {
      "name": "my-store"
    }
  }
}
I tried in Gatsby too, and the same errors came back, obviously.
What am I doing wrong?
OK, my mistake, I went too fast: in the private app you have several keys: <API key>, <shared secret> and <API Storefront access token>. I used the <API key> instead of the <API Storefront access token>. Now everything is OK...
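For anyone hitting the same thing, a minimal request sketch (store name and token value are placeholders; the header must carry the Storefront access token, not the admin API key):
const query = '{ shop { name } }'

fetch('https://my-store.myshopify.com/api/graphql', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/graphql',
    // the Storefront access token from the private app, NOT the API key
    'X-Shopify-Storefront-Access-Token': 'STOREFRONT_ACCESS_TOKEN'
  },
  body: query
})
  .then(res => res.json())
  .then(result => console.log(result))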
