How do I log parameters from custom containers in my Vertex AI pipeline? - google-cloud-vertex-ai

When using custom containers to train models in my pipeline, I want a way to log parameters. I don't see this documented yet.

Do you mean to output parameters from a custom container component?
Here's the latest documentation for creating a custom container component: https://www.kubeflow.org/docs/components/pipelines/v2/author-a-pipeline/components/#3-custom-container-components
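In KFP v2 (which Vertex AI Pipelines uses), a custom container component can expose an output parameter through dsl.OutputPath: the backend hands the container a file path, and whatever the container writes there is recorded as the component's output. A minimal sketch, where the image and training script are hypothetical:

```python
from kfp import dsl

@dsl.container_component
def train(learning_rate: float, best_accuracy: dsl.OutputPath(str)):
    # The backend injects a file path for best_accuracy; the value the
    # container writes to that path becomes the component's output parameter.
    return dsl.ContainerSpec(
        image='gcr.io/my-project/trainer:latest',  # hypothetical image
        command=['sh', '-c',
                 'mkdir -p "$(dirname "$1")" && ./train.sh --lr "$0" > "$1"'],
        args=[learning_rate, best_accuracy],
    )
```

Downstream components can then consume train(...).outputs['best_accuracy'] like any other pipeline parameter.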

Related

Terraform: How to fetch or destroy resources created by other means?

Sometimes I end up creating resources via the AWS console, due to errors in Terraform or for lack of time. Can I list all my resources and destroy them? Basically, discovery and management of existing cloud resources.
For example: list my EC2 instances using Terraform and destroy them when needed. How can I achieve this?
Terraform is designed to ignore any existing objects that it didn't create: otherwise it would be risky to adopt Terraform in a system with many existing objects, and it would be impossible to decompose the infrastructure into separate configurations for each subsystem without each one trying to destroy the objects managed by the others.
Terraform doesn't have any facility for automatically detecting objects created outside of Terraform, but you can explicitly bind specific objects from your remote system to resource instances in your Terraform configuration using the terraform import command.
That command has some safeguards to prevent you from accidentally deleting an object you've just imported if, for example, you make a typo in the resource instance address. Unfortunately, the design of this command is therefore contrary to your goal: it won't let you just import something and run terraform apply to destroy it.
Instead, you'd need to:
Write an empty stub resource block of the appropriate type in your configuration.
Run terraform import to bind your existing real object to that empty resource block.
After the import succeeds, immediately remove the resource block to tell Terraform that you intend to delete the object.
Run terraform apply, and then Terraform should notice that it's tracking an object that is no longer mentioned in the configuration and propose to delete it.
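As a concrete sketch of that cycle, scripted from Python (the instance ID and resource address are hypothetical, and an empty stub block such as resource "aws_instance" "doomed" {} is assumed to already exist in the configuration):

```python
import subprocess

# Bind the existing EC2 instance to the stub resource block.
subprocess.run(
    ["terraform", "import", "aws_instance.doomed", "i-0123456789abcdef0"],
    check=True,
)

# At this point, remove the stub block from the configuration by hand
# (or script that too), then apply: Terraform will notice the tracked
# object is no longer in the configuration and propose to destroy it.
subprocess.run(["terraform", "apply"], check=True)
```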
Terraform is not the best tool for this job: it was essentially designed to do the exact opposite of what you want, since users typically want to avoid destroying untracked objects so as not to disrupt neighboring systems.
However, you may be able to get the effect you want with some custom programming on your part, by writing a program that does something like the following (a sketch follows the list):
Run terraform show -json in all of your configuration working directories to obtain a machine-readable description of the Terraform state in each one.
Decode the JSON state descriptions to find all of the resource instances of type aws_instance and collect a set of all of their id attribute values. This is the set of instances to keep.
Call the EC2 API DescribeInstances action to retrieve a list of all of the instances that actually exist. Collect a set of all of their IDs. This is the set of instances that exist.
Set-subtract the set of instances to keep from the set of instances that exist. The result is the set of instances to destroy.
If the set of instances to destroy isn't empty, call the EC2 API's TerminateInstances action to terminate every instance ID in that set.
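A minimal sketch of that program in Python, assuming the hypothetical working directories and region shown, using boto3 for the EC2 calls (child modules in the Terraform state are ignored for brevity):

```python
import json
import subprocess

import boto3

working_dirs = ["./network", "./app"]  # hypothetical configuration directories

# Steps 1-2: collect the IDs of every aws_instance Terraform is tracking.
keep = set()
for wd in working_dirs:
    result = subprocess.run(
        ["terraform", "show", "-json"],
        cwd=wd, capture_output=True, check=True, text=True,
    )
    state = json.loads(result.stdout)
    resources = state.get("values", {}).get("root_module", {}).get("resources", [])
    keep.update(r["values"]["id"] for r in resources if r["type"] == "aws_instance")

# Step 3: list the instances that actually exist.
ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region
exist = {
    instance["InstanceId"]
    for page in ec2.get_paginator("describe_instances").paginate()
    for reservation in page["Reservations"]
    for instance in reservation["Instances"]
}

# Steps 4-5: terminate everything that exists but isn't tracked.
to_destroy = exist - keep
if to_destroy:
    ec2.terminate_instances(InstanceIds=sorted(to_destroy))
```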
This description is specific to Amazon EC2 instances. The same pattern could apply to objects of any other type, but there is no general solution that will work across all object types at once because the AWS API design doesn't work that way: each object type has its own separate operations for querying which objects exist and for destroying a particular object or set of objects.

Define a Multi-Stage-Environment UI (Angular) in Kubernetes

A question regarding a multi-stage environment in Kubernetes.
I have dev, test, and prod K8s clusters, and environment variables that differ from stage to stage (like backend URLs).
I was thinking of using an init container to replace the backend URLs per stage, so they aren't hardcoded and can be changed when something changes.
Is this an anti-pattern, or would you just pack the backends together with the frontend (which is not really possible for us, since we sometimes have more than one backend URL)?
You should use ConfigMaps to set the environment variables:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
Example for Angular:
Configmaps - Angular
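As a hedged sketch, one way to go is a per-stage ConfigMap holding the backend URLs, which the pod (or your init container) can read instead of hard-coded values. Here it is created with the official Kubernetes Python client; the namespace and URLs are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# One ConfigMap per stage (dev/test/prod), each with that stage's URLs.
v1.create_namespaced_config_map(
    namespace="dev",
    body=client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name="frontend-config"),
        data={
            "BACKEND_URL": "https://api.dev.example.com",  # hypothetical
            "AUTH_URL": "https://auth.dev.example.com",    # hypothetical
        },
    ),
)
```

The deployment then references the ConfigMap via envFrom, or mounts it as a file the Angular app fetches at startup, so only the ConfigMap differs between stages.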

Jenkins Templates

I am new to the CloudBees Enterprise edition of Jenkins and to the concept of "templates".
I am trying to define a new template, and this template will be used by 20-30 jobs. The job is a basic build job. After the build, I would like to run a code analysis plugin. How can I define that in a Jenkins template?
I can define it under "Post-build Actions" when creating a regular job, but I'm not sure how to define the same in a template.
Do you have any solutions/suggestions?
The CloudBees Templates plugin is very powerful but not easy to master. Creation and administration of templates is not as user-friendly as one would wish. ;)
For an introduction I recommend reading the following resources:
Basic concept
Template documentation
Tutorial for simple job template
Make sure you understand the difference between a builder template and a job template. I assume you want to create a number of jobs using the job template. Follow these steps:
First of all, create a normal job that contains all the actions you want every templated job to perform.
Make sure this job works as expected for one example configuration.
Now create a new job template:
You will need to decide which parts of the job configuration need to be adapted for each job (e.g. the source code repository). Create a parameter for each such configuration option.
You might want to do some pre-processing on the job template's parameters using a transformation script, but we'll skip that for now.
Now you need to add an XML description of what the generated job should do. I recommend copying this XML description from the example job created in step 1. You can access it via this URL: http://your-jenkins/job/this-job/config.xml. Simply copy & paste the XML from the browser. Newer Jenkins versions also allow you to read a job's XML configuration via the user interface.
Finally, fill in the template's arguments within the XML configuration: simply replace the specific (hard-coded) values with references to the names of the template parameters created before: ${param_name}
Save the template
Now create a new job. On the job creation screen you should be able to select your newly created job template as the job type. After creating a job of the template's type, you can define all template parameters for this specific job.
Try to run the template-based job and make sure it works as expected.
Create more template-based jobs as needed.
All template-based jobs share the build steps defined by the job template. If you change the job template later on, all dependent jobs are updated accordingly. This is a very efficient way to administer a large number of similar jobs. It is very much worth the effort. Good luck!
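For the copy step above, a small script can make the round trip less error-prone. A hedged sketch (the Jenkins URL, job name, credentials, and repository URL are hypothetical; authenticate with an API token rather than a password):

```python
import requests

JENKINS = "http://your-jenkins"
AUTH = ("admin", "api-token")  # hypothetical credentials

# Fetch the XML definition of the working example job (step 1).
resp = requests.get(f"{JENKINS}/job/this-job/config.xml", auth=AUTH)
resp.raise_for_status()
xml = resp.text

# Replace a hard-coded value with a template parameter reference, so each
# templated job can supply its own (here: a hypothetical repository URL).
xml = xml.replace("https://github.com/acme/example.git", "${repo_url}")

print(xml)  # paste this into the job template's XML description
```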

How to update properties of a NiFi template programmatically (REST API?)

I have a NiFi template exported as XML. I am using the REST API to upload the template to a NiFi instance. Now, I want to update/add some properties (say, a password) of the template via the REST API (or any other option available, programmatically).
I read the docs and various community threads without success. Links I referred to:
How to set props of processor
Update nifi flow on the fly
Open to any approach.
Thanks
I think there is a bit of confusion in your wording. Correct me if I'm wrong, but I believe what you want to do is:
Create a template in one location
Export it
Upload it to another NiFi instance
Add the template to the canvas (so now it's just components on your NiFi canvas)
Edit the properties of the components that were added
There are generally two different reasons you would want to edit the properties after importing a template: the properties are specific to the instance you're running on, or they are sensitive properties.
With the addition of the "variable registry" in NiFi 0.7.0, you can have multiple files that are read at NiFi's start-up to provide custom variables. Here is a section about it in the NiFi docs. This allows you to have custom variables, specific to each environment you run in, that you reference via Expression Language (EL).
The "variable registry" doesn't help for the sensitive properties though, because the EL used to reference them doesn't get exported with the template (since the property is sensitive). You will need to use the rest-api to update the processor properties explicitly. The NiFi docs give the exact call to use to update a processor (under Processors -> Put). Upgrading the variable registry to work securely is on the NiFi roadmap.
If I was completely off and you simply want to modify a template after importing it into a NiFi instance: you would have to add the template to your graph, delete the template from the listing, and re-create it using the components on your graph. Once imported/created, templates are immutable.

Is there a way to import/export tasks from different CQ instances?

I have two instances of CQ, and I want to be able to import/export tasks between them.
For example:
On instance 1 I can see all tasks by going to http://instance1/libs/cq/taskmanagement/content/taskmanager.html#/tasks/Delta
On instance 2 I can see all tasks by going to http://instance2/libs/cq/taskmanagement/content/taskmanager.html#/tasks/Delta
There might be some scenarios where I want to take all tasks from instance2 and add them as additional tasks to instance1 (on top of the tasks it may already have).
Is this possible to do?
Yes, you can do this with Package Manager. The tasks are stored as nodes in the JCR repository, so you can create a package that filters the task nodes you want to migrate from one instance to another. For example, you could define a package with this filter definition to include all tasks:
/etc/taskmanagement/tasks
If you don't want all tasks, you may need to define the filter(s) more narrowly to pick only the ones you want to include.
For example:
/etc/taskmanagement/tasks/2015-05-04/Delta/TheTaskYouWantToMigrate
Use the browser when defining the filter to find the tasks you want to include.
See Working with Packages for details on using the Package Manager. This tutorial also shows how to create the package and add filters. Once you've created a package with the filters for the tasks you want to include, build the package and download it. On your other instance, upload the package you built and install it. You will then see the tasks on your first instance replicated onto the second instance.
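The build/download/upload/install cycle can also be scripted against the CRX Package Manager's HTTP interface. A hedged sketch (hostnames, credentials, and the package path are hypothetical; the form fields follow the commonly documented curl invocation):

```python
import requests

AUTH = ("admin", "admin")  # hypothetical credentials

# Download the already-built package from the first instance...
pkg = requests.get(
    "http://instance1:4502/etc/packages/my_packages/tasks-migration.zip",
    auth=AUTH,
)
pkg.raise_for_status()

# ...then upload and install it on the second instance in one request.
resp = requests.post(
    "http://instance2:4502/crx/packmgr/service.jsp",
    auth=AUTH,
    files={"file": ("tasks-migration.zip", pkg.content)},
    data={"name": "tasks-migration", "force": "true", "install": "true"},
)
resp.raise_for_status()
```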
In addition to what Shawn said, you can also use replication mechanisms to do the work for you, and replicate the desired nodes between any two instances.
