How to download data with a MinIO URI

When I run a simple Kubeflow pipeline on minikube, where the first pod's output is the second pod's input, the data seems to be saved in MinIO, even though I never set MinIO up intentionally. To check the output data in MinIO, I opened http://localhost:9000/ and reached the login page.
When I ran kubectl get secrets to look for the credentials, I could not find any MinIO secrets, and minioadmin / minioadmin for the Access Key and Secret Key did not work. How can I fetch data from a MinIO URI?
I define the pipeline like this:
import kfp
import kfp.components as comp
from kfp.components import load_component_from_file
example_component1_op = load_component_from_file("./pipelines/components/example_component1/example_component1.yaml")
example_component2_op = load_component_from_file("./pipelines/components/example_component2/example_component2.yaml")
@kfp.dsl.pipeline(name='example_pipeline_20220820')
def example_pipeline():
    example_component1_task = example_component1_op(
        input_1='/app/input.txt',
        input_2=8,
    )
    example_component2_task = example_component2_op(
        input_1=example_component1_task.outputs['output_1'],
        input_2=5,
    )
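
I then submit it with the standard KFP client (a sketch; the host assumes the ml-pipeline UI is port-forwarded to localhost:8080, which may differ in your setup):

# Sketch: compile and submit the pipeline in-process with the KFP v1 client.
client = kfp.Client(host='http://localhost:8080')
client.create_run_from_pipeline_func(example_pipeline, arguments={})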

I found the Access Key and the Secret Key:
Access Key: minio
Secret Key: minio123
Ref: https://github.com/kubeflow/pipelines/blob/master/developer_guide.md
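
With those credentials you can pull artifacts straight out of MinIO. A minimal sketch using the MinIO Python client, assuming the minio service is port-forwarded to localhost:9000 and the default mlpipeline artifact bucket; the object key below is a hypothetical placeholder, so substitute the key from your run's output URI:

from minio import Minio

# Assumes the minio service is port-forwarded to localhost:9000.
client = Minio(
    'localhost:9000',
    access_key='minio',
    secret_key='minio123',
    secure=False,  # the in-cluster MinIO speaks plain HTTP
)

# 'mlpipeline' is the default artifact bucket for Kubeflow Pipelines;
# the object name is a placeholder for your run's artifact key.
client.fget_object(
    'mlpipeline',
    'artifacts/example-pipeline/run-id/output_1.tgz',
    './output_1.tgz',
)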

Related

kubebuilder: debug webhooks locally

We have a kubebuilder controller which is working as expected; now we need to create webhooks.
I followed the tutorial:
https://book.kubebuilder.io/reference/markers/webhook.html
Now I want to run and debug it locally; however, I am not sure what to do about the certificate. Is there a simple way to create it? Any example would be very helpful.
BTW, I've installed cert-manager and applied the sample YAML below, but I'm not sure what to do next ...
I need the simplest solution that lets me run and debug the webhooks locally, as I am already doing with the controller (before adding webhooks):
https://book.kubebuilder.io/cronjob-tutorial/running.html
Cert-manager
I've created the following inside my cluster:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: test
spec:
  # Secret names are always required.
  secretName: example-com-tls
  # secretTemplate is optional. If set, these annotations and labels will be
  # copied to the Secret named example-com-tls. These labels and annotations will
  # be re-reconciled if the Certificate's secretTemplate changes. secretTemplate
  # is also enforced, so relevant label and annotation changes on the Secret by a
  # third party will be overwritten by cert-manager to match the secretTemplate.
  secretTemplate:
    annotations:
      my-secret-annotation-1: "foo"
      my-secret-annotation-2: "bar"
    labels:
      my-secret-label: foo
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
      - jetstack
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  commonName: example.com
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
    - client auth
  # At least one of a DNS Name, URI, or IP address is required.
  dnsNames:
    - example.com
    - www.example.com
  uris:
    - spiffe://cluster.local/ns/sandbox/sa/example
  ipAddresses:
    - 192.168.0.5
  # Issuer references are always required.
  issuerRef:
    name: ca-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer).
    kind: Issuer
    # This is optional since cert-manager will default to this value; however,
    # if you are using an external issuer, change this to that issuer group.
    group: cert-manager.io
I'm still not sure how to wire this into kubebuilder so that it works locally, because when I run the operator in debug mode I get the following error:
setup problem running manager {"error": "open /var/folders/vh/_418c55133sgjrwr7n0d7bl40000gn/T/k8s-webhook-server/serving-certs/tls.crt: no such file or directory"}
What I need is the simplest way to run the webhooks locally.
Let me walk you through the process from the start.
Create the webhook as described in the CronJob tutorial: kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation. This scaffolds webhooks for implementing defaulting and validation logic.
Implement the logic as instructed in "Implementing defaulting/validating webhooks".
Install cert-manager. I find the easiest way to install it is via this command: kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml
Edit the config/default/kustomization.yaml file by uncommenting everything that has [WEBHOOK] or [CERTMANAGER] in its comments. Do the same in the config/crd/kustomization.yaml file.
Build your image locally using make docker-build IMG=<some-registry>/<project-name>:tag. You don't need to docker-push your image to a remote repository. If you are using a kind cluster, you can load your local image directly into it:
kind load docker-image <your-image-name>:tag --name <your-kind-cluster-name>
Now you can deploy it to your cluster with make deploy IMG=<some-registry>/<project-name>:tag.
You can also run the controller locally using the make run command, but that's a little tricky if you have webhooks enabled. I would suggest running it in a kind cluster as described above. That way you don't need to worry about injecting certificates; cert-manager will do that for you. You can check out the config/certmanager folder to see how this is wired up.

Deploy a .sh file to EC2 using Terraform

I am trying to deploy a *.sh file located on my localhost to EC2 using Terraform. Note that I am creating all of the infrastructure via Terraform, so to copy the file to the remote host I am using a Terraform provisioner. The question is: how can I find out the private_key or password of the ubuntu user for deploying? Or maybe somebody knows a different solution. The goal is to run the .sh file on EC2. Thanks beforehand.
If you want to do it using a provisioner and you have the private key local to where Terraform is being executed, then SCSI-9's solution should work well.
However, if you can't ensure the private key is available, you could always do something like what Elastic Beanstalk does for deploys and use S3 as an intermediary.
Something like this:
resource "aws_s3_bucket_object" "script" {
bucket = module.s3_bucket.bucket_name
key = regex("([^/]+$)", var.script_file)[0]
source = var.script_file
etag = filemd5(var.script_file)
}
resource "aws_instance" "this" {
depends_on = [aws_s3_bucket_object.script]
user_data = templatefile("${path.module}/.scripts/userdata.sh" {
s3_bucket = module.s3_bucket.bucket_name
object_key = aws_s3_bucket_object.script.id
}
...
}
And then somewhere in your userdata script, you can fetch the object from S3:
aws s3 cp s3://${s3_bucket}/${object_key} /some/path
Of course, you will also have to ensure that the instance has permissions to read from the s3 bucket, which you can do by attaching a role to the EC2 instance with the appropriate policy.

Handle credentials in a Kubeflow ContainerOp

I'm trying to run a Kubeflow pipeline setup, and I have several environments (dev, staging, prod).
In my pipeline I'm using kfp.components.func_to_container_op to get a factory for pipeline tasks (ContainerOp), which I then call with the appropriate arguments so the task can integrate with my S3 bucket:
from utils.test import test

test_op = comp.func_to_container_op(test, base_image='my_image')

read_data_task = read_data_op(
    bucket,
    aws_key,
    aws_pass,
)

arguments = {
    'bucket': 's3',
    'aws_key': 'key',
    'aws_pass': 'pass',
}

kfp.Client().create_run_from_pipeline_func(pipeline, arguments=arguments)
Each environment uses different credentials to connect, and those credentials are passed into the function:
def test(s3_bucket: str, aws_key: str, aws_pass: str):
    ....
    s3_client = boto3.client('s3', aws_access_key_id=aws_key, aws_secret_access_key=aws_pass)
    s3_client.upload_file(from_filename, bucket_name, to_filename)
So for each environment I need to update the arguments to contain the correct credentials, which makes this very hard to maintain: every time I want to promote from dev to staging to prod, I can't simply copy the code over.
My question is: what is the best approach for passing these credentials?
Ideally you should push any environment-specific configuration as close to the cluster as possible (and as far away from the components).
You can create a Kubernetes secret in each environment with the different credentials, then use that AWS secret in each task:
from kfp import aws

def my_pipeline():
    ...
    conf = kfp.dsl.get_pipeline_conf()
    conf.add_op_transformer(aws.use_aws_secret('aws-secret', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'))
Maybe boto3 can auto-load the credentials using the secret files and the environment variables.
At least all GCP libraries and utilities do that with GCP credentials.
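
For example, once use_aws_secret has injected AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY into the task's environment, the component can rely on boto3's default credential chain instead of taking keys as arguments. A minimal sketch (the file and object names are placeholders):

import boto3

def test(s3_bucket: str):
    # No explicit keys: boto3's default credential chain reads
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment.
    s3_client = boto3.client('s3')
    s3_client.upload_file('/tmp/output.csv', s3_bucket, 'results/output.csv')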
P.S. It's better to create issues in the official repo: https://github.com/kubeflow/pipelines/issues

AWS Ruby SDK credentials from file

I would like to store my credentials in ~/.aws/credentials and not in environment variables, but I am struggling.
To load the credentials I use (from here):
credentials = Aws::SharedCredentials.new({region: 'myregion', profile_name: 'myprofile'})
My ~/.aws/credentials is
[myprofile]
AWS_ACCESS_KEY = XXXXXXXXXXXXXXXXXXX
AWS_SECRET_KEY = YYYYYYYYYYYYYYYYYYYYYYYYYYY
My ~/.aws/config is
[myprofile]
output = json
region = myregion
I then define a resource with
aws = Aws::EC2::Resource.new(region: 'eu....', credentials: credentials)
but if I try for example
aws.instances.first
I get the error Error: #<Aws::Errors::MissingCredentialsError: unable to sign request without credentials set>
Everything works if I hard-code the keys.
According to the source code, the SDK loads credentials automatically only from ENV.
You can create credentials with custom attributes.
credentials = Aws::Credentials.new(AWS_ACCESS_KEY, AWS_SECRET_KEY)
aws = Aws::EC2::Resource.new(region: 'eu-central-1', credentials: credentials)
In your specific case, it seems there is no way to pass custom credentials to SharedCredentials.
If you just do
credentials = Aws::SharedCredentials.new()
it loads the default profile. You should be able to load myprofile by passing :profile_name as an option, e.g. Aws::SharedCredentials.new(profile_name: 'myprofile').
I don't know if you can also override the region, though. You might want to try dropping that option and see how it works.

Flask with MongoDB using MongoKit on MongoLab

I am a beginner, and I have a simple application developed locally which uses MongoDB with MongoKit as follows:
app = Flask(__name__)
app.config.from_object(__name__)
customerDB = MongoKit(app)
customerDB.register([CustomerModel])
Then in the views I just use customerDB.
I have put everything on Heroku, but my database connection doesn't work.
I got the connection string I need via:
heroku config | grep MONGOLAB_URI
but I am not sure how to use this value. I looked at the following post, but I am even more confused:
How can I use the mongolab add-on to Heroku from python?
Any help would be appreciated.
Thanks!
According to the documentation, Flask-MongoKit supports a set of configuration settings:
MONGODB_DATABASE
MONGODB_HOST
MONGODB_PORT
MONGODB_USERNAME
MONGODB_PASSWORD
The MONGOLAB_URI environment setting needs to be parsed to get each of these. We can use this answer to the question you linked to as a starting point.
import os
from urlparse import urlsplit

from flask import Flask
from flask_mongokit import MongoKit

app = Flask(__name__)

# Get the URL from the Heroku setting.
url = os.environ.get('MONGOLAB_URI', 'mongodb://localhost:27017/some_db_name')

# Parse it.
parsed = urlsplit(url)

# The database name comes from the path, minus the leading /.
app.config['MONGODB_DATABASE'] = parsed.path[1:]

if '@' in parsed.netloc:
    # If there are authentication details, split them off the network location.
    auth, server = parsed.netloc.split('@')
    # The username and password are in the first part, separated by a :.
    app.config['MONGODB_USERNAME'], app.config['MONGODB_PASSWORD'] = auth.split(':')
else:
    # Otherwise the whole thing is the host and port.
    server = parsed.netloc

# Split whatever is left to get the host and port (the port must be an int).
host, port = server.split(':')
app.config['MONGODB_HOST'] = host
app.config['MONGODB_PORT'] = int(port)
customerDB = MongoKit(app)
