Databricks init script UI is empty

In the Databricks cluster configuration UI, I am trying to add an init script. I have stored the script in DBFS, but the init script UI has no drop-down or other obvious way to select this file. What am I missing? I have followed the instructions here:
https://learn.microsoft.com/en-us/azure/databricks/clusters/init-scripts
Every tutorial I find shows a UI that includes a destination, a path, and an Add button. My UI shows none of that.

Init scripts must be added while the cluster is in edit mode. The cluster must be terminated before its settings can be edited. Once the cluster is in edit mode, the init script UI becomes active.

There is no selector in the Cluster UI - it's just a normal input field where you need to type or paste the path to the init script.
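For what it's worth, a DBFS-backed script is typically referenced with a path of the following form (the directory and file name here are made up for illustration):
dbfs:/databricks/scripts/my-init.sh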

Related

Issue setting Jenkins environment variables on EC2-Fleet

We are having issues setting the Jenkins environment variables on our dynamic EC2-Fleet.
We already have a fixed master (Linux) and a fixed Windows slave but wanted to add slaves dynamically when the load on the system becomes heavy.
For this we created a Spot Request Instance in AWS spinning up Linux machines from an AMI, and we control this via the EC2-fleet-plugin in Jenkins.
Before this EC2-fleet can be of any help, our jobs must be able to run on its nodes.
Most of our jobs use Jenkinsfiles and need certain environment variables to be set but the EC2-fleet-plugin does not provide the possibility to set environment variables (https://issues.jenkins-ci.org/browse/JENKINS-36544).
As suggested on this ticket (JENKINS-36544), we tried to set the environment variables in "System Configuration" for the dynamic ec2 slaves and set the environment variables for the other nodes on the "Node Configuration" overriding the "System Configuration", or so we thought.
This would work if this bug didn't exist: https://issues.jenkins-ci.org/browse/JENKINS-44425. Because of this bug, the "System Configuration" overrides the "Node Configuration" instead of vice versa. So we can't use this approach, as the existing nodes would no longer have the correct environment variables.
As a last resort we tried to set the environment variables on the dynamic ec2 slaves by creating an /etc/profile.d/jenkinsvars.sh on the AMI used by the Spot Request Instance.
This script would be automatically run on login system wide (https://help.ubuntu.com/community/EnvironmentVariables#A.2Fetc.2Fprofile.d.2F.2A.sh).
Next to that we also attempted to set them in /home/ubuntu/.profile on the AMI singling out the ubuntu user which is the user running the Jenkins agent (https://help.ubuntu.com/community/EnvironmentVariables#A.2BAH4-.2F.profile).
But it appears Jenkins ignores these environment variables and uses its own...
One approach that works is to adapt the jobs to load a Groovy file that is part of the AMI to set the environment variables we need, but that would mean changing almost all of our jobs, as well as all the Jenkinsfiles included in our repositories (Bitbucket project).
We would like to avoid this.
Try the following strategy:
Leverage User Data to run a shell script when the Spot Instance launches. This is the primary approach recommended by Amazon and the plugin authors.
Instead of saving variables into the environment, have the user data script save them to /var/tmp, /etc/profile, or the Parameter Store. Refer to the answers in this SO question. If you want to encrypt your info, use the KMS-backed Parameter Store; if you don't care, use one of the others. Choose the answer that best fits your needs.
Alter your Jenkins job to pause until your user data script has completed running (refer to the documentation from the plugin).
Change your Jenkins job to pick up the variables from the location you chose in step 2 (a sketch of the Parameter Store variant follows below).
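For illustration, here is a minimal Python sketch of step 4, assuming the user data script stored the variables in SSM Parameter Store under a hypothetical /jenkins/env/ prefix (the prefix and parameter names are assumptions, not anything the plugin defines):

import os
import boto3

ssm = boto3.client("ssm")

def load_jenkins_env(prefix="/jenkins/env/"):
    # Page through every parameter stored under the prefix
    paginator = ssm.get_paginator("get_parameters_by_path")
    for page in paginator.paginate(Path=prefix, WithDecryption=True):
        for param in page["Parameters"]:
            # Strip the prefix so "/jenkins/env/FOO" becomes "FOO"
            name = param["Name"].rsplit("/", 1)[-1]
            os.environ[name] = param["Value"]

load_jenkins_env()

A job can call this at the start of its build script so the variables are available to everything that runs afterwards.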
Try restarting the server environment.
Just saying.
So we can't use this as the existing nodes would not have the correct environment variables anymore.
Update your existing nodes to load the environment variables when they are provisioned/started; then remove the variables from the System configuration and add them to the Node configuration.
You could also try setting the Slave command prefix field to ENV_VAR1=val1 ENV_VAR2=val2, although I haven't tried that.
Thirdly, you can try putting your variables directly into /etc/profile, which should be loaded no matter which user you log in as.
However, the easiest by far is to make all of your drones/agents exactly the same and set your environment variables in whatever scripts you run to build your projects. Use docker to pull dependencies to the agents as necessary during the job and to set up specific environments for your applications. This greatly simplifies the maintenance and configuration of your agents.
The Jenkins version and the EC2 plugin version are missing from the question, but according to the description in this merged pull request, this bug should be fixed now: https://github.com/jenkinsci/ec2-plugin/pull/440#issuecomment-597160730
Jenkins version: the change works on both <=2.204 and >=2.205
EC2 plugin version: >=ec2-1.50
PR: JENKINS-36544 - Fix Node Properties on Jenkins 2.205+ (#440), by jhansche
From the Pull Request description:
Navigate to the cloud configuration screen (this moves to a new page >=2.205)
Click "Add a new cloud"
Click "Amazon EC2"
Under the "AMIs" section, click "Add"
At the bottom of the AMI block, expand "Advanced"
Expect to see "Node Properties" block at the bottom of the block
The Node Properties block contains the Environment variables.

Use of multiple custom scripts in the metadata parameter of LaunchInstanceDetails in oci.core.models

I understand that LaunchInstanceDetails in oci.core.models has a parameter, metadata, wherein one of the metadata keys that can be used to provide information to Cloud-Init is user_data, which can be used to run custom scripts when provided in base64-encoded format.
In my Python code to create a Windows VM, while launching the instance, I have a requirement to run 2 custom scripts:
Script to log in to the Windows machine via RDP - this is mandatory (needs to be executed every time a new Windows VM is created, without fail). Currently, we have included this in the metadata parameter while launching the instance, as below:
instance_metadata['user_data'] = oci.util.file_content_as_launch_instance_user_data(path_init)
Bootstrap script to install Chef during the initialization tasks - this is conditional (it needs to run only if the user wishes to install Chef, which we handle internally by means of a flag in the code). This is yet to be implemented, as we need to identify whether more than one custom script (conditional in this case) can be included.
Can someone help me understand whether and how we can include multiple scripts (keeping in mind the conditional clause) in a single metadata variable, or whether we can have multiple metadata keys or some other parameter in this service that could be used to run the Chef installation script?
I'd suggest combining these into a single script and using the conditional in an if statement to install Chef as required.
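A minimal Python sketch of that approach, assuming hypothetical script file names and an install_chef flag; only the final combined payload is handed to the SDK:

import base64

def build_user_data(path_init, path_chef, install_chef):
    # The mandatory RDP/login script always runs
    with open(path_init, "r") as f:
        combined = f.read()
    # Append the Chef bootstrap only when the flag is set
    if install_chef:
        with open(path_chef, "r") as f:
            combined += "\n" + f.read()
    # Cloud-Init expects the user_data value to be base64-encoded
    return base64.b64encode(combined.encode("utf-8")).decode("utf-8")

# instance_metadata is the dict later passed as the metadata of LaunchInstanceDetails
instance_metadata = {}
instance_metadata["user_data"] = build_user_data(
    "init_rdp.ps1", "install_chef.ps1", install_chef=True)

The combined script could also branch internally, but building it conditionally in Python keeps the flag logic in one place.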

Configuring settings for Last Participant Support with wsadmin/WebSphere

Recently I ran into an issue configuring Last Participant Support on a deployed application. I found an old post about it:
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014090728
On the server itself I found how to do it, but with Jython or wsadmin commands I am not able to find how to do it on the application itself.
The post above does not help me. Any ideas?
There is no command assistance available for the action of changing Last Participant Support from the admin console, which typically implies there is no scripting command associated with the action. And there doesn't appear to be a wsadmin AdminApp command to modify the property. Looking at the config repo changes made as a result of the admin console action, the IBM Programming Model Extensions (PME) deployment descriptor file "ibm-application-ext-pme.xmi" for an application is created/modified by the action.
If possible, the best long-term solution would be to use a tool like RAD to generate that extensions file when packaging the application, because then your config changes wouldn't get overridden if you need to redeploy the app. If you can't modify the app, you can script the addition of a PME descriptor file in each of the desired apps, with the knowledge that redeploying the app will overwrite your changes. The changes can be made by doing something along the lines of:
1) Create a text file named ibm-application-ext-pme.xmi with contents similar to this:
<pmeext:PMEApplicationExtension xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:pmeext="http://www.ibm.com/websphere/appserver/schemas/5.0/pmeext.xmi" xmi:id="PMEApplicationExtension_1559836881290">
<lastParticipantSupportExtension xmi:id="LastParticipantSupportExtension_1559836881292" acceptHeuristicHazard="false"/>
</pmeext:PMEApplicationExtension>
2) In wsadmin or your Jython script, do the following (note that in this example the xmi file you created is in the current directory; if not, include the full path to it in the createDocument command):
# Target location for the descriptor in the configuration repository
deployUri = "cells/<your_cell_name>/applications/<your_app_name>.ear/deployments/<your_app_name>/META-INF/ibm-application-ext-pme.xmi"
# Create the document from the local xmi file and persist the change
AdminConfig.createDocument(deployUri, "ibm-application-ext-pme.xmi")
AdminConfig.save()
3) Restart the server.

How to schedule a shell script using Google Cloud Shell?

I have a .sh file that is stored in GCS. I am trying to schedule the .sh file through Google Cloud Shell.
I can run the file using the gsutil cat gs://miptestauto/baby.sh | sh command, but I am not able to schedule it.
Following is my crontab entry for scheduling the file:
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh
It displays the message "auto saving...done", but the scheduled job is not displayed when I use crontab -l.
# contents of .sh file
#!/bin/bash
bq load --source_format=CSV babynames.baby_destination13 gs://testauto/yob2010.txt name:string,gender:string,count:integer
Can anyone please tell me how to schedule it using Google Cloud Shell?
I am not using Compute Engine/App Engine. I just want to schedule it using Cloud Shell.
Thank you in advance :)
As per the documentation, Cloud Shell is intended for interactive use only. The Cloud Shell instances are provisioned on a per-user, per-session basis and sessions are terminated after an hour of inactivity.
In order to schedule a daily cron job, the instance needs to be up and running at all times, but this doesn't happen with Cloud Shell, and I believe your jobs are not running because of this.
When you start Cloud Shell, it provisions an f1-micro instance, which is the same machine type you can get for free if you are eligible for "Always Free". Therefore you can create an f1-micro instance, configure the cron job on it, and leave it running so it can execute the daily job.
You can check free usage limits at https://cloud.google.com/compute/pricing#freeusage
You can also use the Cloud Scheduler product (https://cloud.google.com/scheduler), which is a serverless managed cron-like scheduler.
To schedule a script, you first have to create a project if you don't have one. I assume you already have a project, so in that case just create the instance that you want to use for scheduling this script.
To create the new instance:
At the Google Cloud Platform Console, click on Products & Services, which is the icon with the four bars in the top left-hand corner.
On the menu go to the Compute section and hover on Compute Engine and then click on VM Instances.
Go to the menu bar above the instance section and there you will see a Create Instance button. Click it and fill in the configuration values that you want your new instance to have. The values that you select will determine your VM instance features. You can choose, among other values, the name, zone and machine type for your new instance.
In the Machine type section click the drop-down menu tab to select an “f1-micro instance”.
In the Identity and API access section, give access scope to the Storage API so that you can read and write to your bucket in case you need to do so; the default access scope only allows you to read. Also enable the BigQuery API.
Once you have the instance created and access to the bucket, just create your cron job inside your new instance: in the user account under which the cron job will execute, run crontab -e and edit this file to run the cron job that will execute your baby.sh script. The crontab documentation should help you with this.
Please note, if you want to view the output from your script you may need to redirect it to a file or to your current terminal.
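For example, a crontab entry mirroring the one in the question that also captures the output in a log file (the log path is just an illustration):

16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh >> /tmp/baby.log 2>&1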

Debugging elastic search code with IntelliJ

I am trying to debug the Elasticsearch source code with IntelliJ. I built the source with IntelliJ and the current Program argument is start. I tried passing the parameters necessary to create an index in the program argument section but it doesn't seem to work. Where do I need to pass the parameters to create indices or perform other operations?
To begin testing/debugging requests, you will need to start the server by adding the start command as a Program argument. Once the server has started, open up a terminal and issue curl commands against it. You can place breakpoints in the code to view the workflows involved.
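For instance, instead of curl you could drive the same HTTP API from a small Python script; a minimal sketch (the index name is hypothetical and the default port 9200 is assumed):

import requests

# Creating an index exercises the index-creation workflow,
# so a breakpoint in that code path will be hit
resp = requests.put("http://localhost:9200/my-test-index")
print(resp.status_code, resp.json())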
