Not able to create a task from the XL Deploy CLI

I am trying to deploy a DAR file using the CLI. I have set up the CLI on one of our build machines.
I have gone through the documentation as well (https://docs.xebialabs.com/xl-deploy/4.5.x/climanual.html). But when I run the code below, I get an error at the step where the task is created.
# Import package
deployit> package = deployit.importPackage('demo-application/1.0')
# Load environment
deployit> environment = repository.read('Environments/DiscoveredEnv')
# Start deployment
deployit> deploymentRef = deployment.prepareInitial(package.id, environment.id)
deployit> deploymentRef = deployment.generateAllDeployeds(deploymentRef)
deployit> taskID = deployment.deploy(deploymentRef).id
deployit> deployit.startTaskAndWait(taskID)
Error:
javax.ws.rs.ProcessingException: com.thoughtworks.xstream.converters.ConversionException:
---- Debugging information ----
cause-exception : java.lang.NullPointerException
cause-message : Name is null
class : com.xebialabs.deployit.engine.api.execution.SerializableTask
required-type : com.xebialabs.deployit.engine.api.execution.SerializableTask
converter-type : com.xebialabs.deployit.booter.remote.xml.TaskConverterSelector
path : /task
line number : 1
version : not available
-------------------------------
How can I fix this issue?

Here is an example in which you can start a deployment task, check each step's status, and print the logs for failed ones:
# Load package
package = repository.read('Applications/TestApps/1.0')
# Load environment
environment = repository.read('Environments/TestingEnv')
# Start deployment
deploymentRef = deployment.prepareInitial(package.id, environment.id)
depl = deployment.prepareAutoDeployeds(deploymentRef)
task = deployment.createDeployTask(depl)
deployit.startTaskAndWait(task.id)
# Check on deployment errors
steplist = tasks.steps(task.id)
for s in steplist.steps:
    print(' Step: ' + s.description)
    print(' Status: ' + str(s.state))
    if str(s.state) == 'FAILED':
        print('ERROR ' + s.log)
You can also have a look at the XL Deploy log file, located on the XL Deploy server under XLD_INSTALL_HOME/log/deployit.log, to get more details in case of errors.
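If you run this flow repeatedly, the same calls can be wrapped in a small helper. This is just a sketch composed from the CLI objects used above (repository, deployment, tasks, deployit); the application and environment paths are placeholders:
def deploy_and_report(app_path, env_path):
    # Load the package and environment from the repository
    package = repository.read(app_path)
    environment = repository.read(env_path)
    # Prepare the deployment and let XL Deploy generate the deployeds
    deploymentRef = deployment.prepareInitial(package.id, environment.id)
    depl = deployment.prepareAutoDeployeds(deploymentRef)
    # Create and run the task, then report on each step
    task = deployment.createDeployTask(depl)
    deployit.startTaskAndWait(task.id)
    for s in tasks.steps(task.id).steps:
        print(' Step: ' + s.description + ' -> ' + str(s.state))
        if str(s.state) == 'FAILED':
            print('ERROR ' + s.log)

deploy_and_report('Applications/TestApps/1.0', 'Environments/TestingEnv')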

Related

PowerShell DSC resource MSFT_PackageResource failed: The return code 1618 was not expected. Configuration is likely not correct

I have an exe file downloaded to a specific folder in the VM, and I am trying to install Adobe using PowerShell DSC code.
The script is failing with the below error during execution (the configuration is called through an ARM template); however, if I check inside the VM, Adobe is installed.
I tried running the same script manually inside the VM and did not face any error.
[{"code":"VMExtensionProvisioningError","message":"VM has reported a failure when processing extension 'configureWindowsServer'. Error message: "DSC Configuration 'Adobe' completed with error(s). Following are the first few: PowerShell DSC resource MSFT_PackageResource failed to execute Set-TargetResource functionality with error message: The return code 1618 was not expected. Configuration is likely not correct The SendConfigurationApply function did not succeed."\r\n\r\nMore information on troubleshooting is available at https://aka.ms/VMExtensionDSCWindowsTroubleshoot "}]}
Configuration Adobe
{
    $PackagesFolder = "C:\Packages\Adobe"
    $AcrobatReader = @{
        "Name" = "Adobe Acrobat Reader DC"
        "ProductId" = "XXXXXX-XXXXX-XXXXXXX"
        "Installer" = "AcroRdrDC.exe"
        "FileHash" = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
        "HashAlgorithm" = "SHA256"
        "DestinationPath" = "$PackagesFolder\AdobeAcrobatReaderDC"
        "Arguments" = "/msi EULA_ACCEPT=YES /qn"
    }
    Package AdobeAcrobatReaderDC {
        Ensure = "Present"
        Name = $AcrobatReader.Name
        ProductId = $AcrobatReader.ProductId
        Path = ("{0}\{1}" -f $AcrobatReader.DestinationPath, $AcrobatReader.Installer)
        Arguments = $AcrobatReader.Arguments
    }
}

Google automl_v1beta1 error "the provided location ID is not valid"

I am trying to call a trained model from Google Colab using the example provided, but I get an error.
Does anyone know whether this is a beta error, or have I not set something up properly?
Thanks in advance.
The code:
from google.cloud import automl_v1beta1 as automl

automl_client = automl.AutoMlClient()
# Create client for prediction service.
prediction_client = automl.PredictionServiceClient().from_service_account_json('XXXXX.json')
# Get the full path of the model.
model_full_id = automl_client.model_path(
    project_id, compute_region, model_id
)
# Read the file content for prediction.
# with open(file_path, "rb") as content_file:
snippet = "fsfsf"  # content_file.read()
# Set the payload by giving the content and type of the file.
payload = {"text_snippet": {"content": snippet, "mime_type": "text/plain"}}
# params is additional domain-specific parameters.
# currently there are no additional parameters supported.
params = {}
response = prediction_client.predict(model_full_id, payload, params)
print("Prediction results:")
for result in response.payload:
    print("Predicted class name: {}".format(result.display_name))
    print("Predicted class score: {}".format(result.classification.score))
The error message:
InvalidArgument: 400 List of found errors: 1.Field: name; Message: The provided location ID is not valid.
You have to use a region that supports AutoML beta. This works for me:
create_dataset("myproj-123456", "us-central1", "my_dataset_id", "en", "de")
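The same fix applies to the prediction snippet from the question: the compute_region used to build the model path must be a region that supports AutoML beta. A minimal sketch, assuming hypothetical project and model IDs:
from google.cloud import automl_v1beta1 as automl

automl_client = automl.AutoMlClient()
project_id = "myproj-123456"         # hypothetical project ID
compute_region = "us-central1"       # a region that supports AutoML beta
model_id = "TCN1234567890123456789"  # hypothetical model ID
model_full_id = automl_client.model_path(project_id, compute_region, model_id)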
I cloned the repo "python-docs-samples":
$ git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
I navigated to the AutoML examples:
$ cd /home/MY_USER/python-docs-samples/language/automl/
I set the environment variables for [1] (see the sketch after this list):
GOOGLE_APPLICATION_CREDENTIALS
PROJECT_ID
REGION_NAME
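For reference, since the question mentions Colab, the same variables can be set from Python before the clients are created. A sketch with placeholder values:
import os

# Placeholder values; substitute your own credentials file, project, and region
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/MY_USER/XXXXX.json"
os.environ["PROJECT_ID"] = "myproj-123456"
os.environ["REGION_NAME"] = "us-central1"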
I typed:
$ python automl_natural_language_dataset.py create_dataset automltest1 False
I got this message:
Dataset name: projects/198768927566/locations/us-central1/datasets/TCN7889001684301386365
Dataset id: TCN7889001684301386365
Dataset display name: automltest1
Text classification dataset metadata:
classification_type: MULTICLASS
Dataset example count: 0
Dataset create time:
seconds: 1569367227
nanos: 873147000
I set the environment variable for:
DATASET_ID
Please note that I got this value from step 5.
I typed:
python automl_natural_language_dataset.py import_data $DATASET_ID "gs://$PROJECT_ID-lcm/complaints_manual.csv"
I got this message:
Processing import...
Dataset imported.
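To confirm the import went through, one option is to list the datasets in the project and check the example count. A sketch reusing the project and region from above (the filter_ argument is left empty, as in the repo's samples):
from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
# us-central1 is the region used throughout this walkthrough
parent = client.location_path("myproj-123456", "us-central1")
for dataset in client.list_datasets(parent, filter_=""):
    print(dataset.display_name, dataset.example_count)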

Getting an error while connecting Databricks to Azure SQL DB with ActiveDirectoryPassword

I am trying to connect to an Azure SQL DB from Databricks with AAD password authentication. I imported the Azure SQL DB and adal4j libraries, but I am still getting the error below:
java.lang.NoClassDefFoundError: com/nimbusds/oauth2/sdk/AuthorizationGrant
stack trace:
at com.microsoft.sqlserver.jdbc.SQLServerADAL4JUtils.getSqlFedAuthToken(SQLServerADAL4JUtils.java:24)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.getFedAuthToken(SQLServerConnection.java:3609)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.onFedAuthInfo(SQLServerConnection.java:3580)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.processFedAuthInfo(SQLServerConnection.java:3548)
at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onFedAuthInfo(tdsparser.java:261)
at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:103)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:4290)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:3157)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$100(SQLServerConnection.java:82)
at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:3121)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2026)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1687)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1528)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:866)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:569)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:115)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:5
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:590)
at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:474)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:548)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:380)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:327)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:215)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.nimbusds.oauth2.sdk.AuthorizationGrant
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
I imported the nimbusds library into my workspace.
Here is my config:
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import org.apache.spark.sql.SparkSession
val spark: SparkSession = SparkSession.builder().getOrCreate()
val config = Config(Map(
  "url" -> "ServerName.database.windows.net",
  "databaseName" -> "dbname",
  "dbTable" -> "dbo.test",
  "user" -> "alias@domain.com",
  "password" -> "pwd",
  "authentication" -> "ActiveDirectoryPassword",
  "encrypt" -> "true",
  "trustServerCertificate" -> "false",
  "hostNameInCertificate" -> "*.database.windows.net"
))
val collection = spark.read.sqlDB(config)
collection.show()
Please help me if anyone has resolved this issue.
The steps below are taken from a working notebook.
Create a Databricks Cluster
Known working configuration - Databricks Runtime 5.2 (includes Apache Spark 2.4.0, Scala 2.11)
Install the Spark Connector for Microsoft Azure SQL Database and SQL Server
Navigate to Cluster > Library > Install New > Maven > Search Packages
Switch to Maven Central
Search for azure-sqldb-spark (com.microsoft.azure:azure-sqldb-spark)
Click Select
Click Install
Known working version - com.microsoft.azure:azure-sqldb-spark:1.0.2
Update Variables
Update the variable values (clusterName, server, database, table, username, password)
Run the Initialisation command (ONCE ONLY)
This will do the following:
Create a folder called init under dbfs:/databricks/init/
Create a sub-folder with the name of the Databricks cluster
Create a bash script per dependency
Bash Script Commands:
* wget: Retrieve content from a web server
* --quiet: Turns off wget's output
* -O: Write the output to the given file
Dependencies:
http://central.maven.org/maven2/com/microsoft/azure/adal4j/1.6.0/adal4j-1.6.0.jar
http://central.maven.org/maven2/com/nimbusds/oauth2-oidc-sdk/5.24.1/oauth2-oidc-sdk-5.24.1.jar
http://central.maven.org/maven2/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar
http://central.maven.org/maven2/com/nimbusds/nimbus-jose-jwt/7.0.1/nimbus-jose-jwt-7.0.1.jar
Restart the Databricks Cluster
This is needed to execute the init script.
Run the last cell in this Notebook
This will test the ability to connect to an Azure SQL Database via Active Directory authentication.
Init Command
// Initialisation
// This code block only needs to be run once to create the init script for the cluster (file remains on restart)
// Get the cluster name
var clusterName = dbutils.widgets.get("cluster")
// Create dbfs:/databricks/init/ if it doesn’t exist.
dbutils.fs.mkdirs("dbfs:/databricks/init/")
// Create a directory named (clusterName) using Databricks File System - DBFS.
dbutils.fs.mkdirs(s"dbfs:/databricks/init/$clusterName/")
// Create the adal4j script.
dbutils.fs.put(s"/databricks/init/$clusterName/adal4j-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/adal4j-1.6.0.jar http://central.maven.org/maven2/com/microsoft/azure/adal4j/1.6.0/adal4j-1.6.0.jar
wget --quiet -O /mnt/jars/driver-daemon/adal4j-1.6.0.jar http://central.maven.org/maven2/com/microsoft/azure/adal4j/1.6.0/adal4j-1.6.0.jar""", true)
// Create the oauth2 script.
dbutils.fs.put(s"/databricks/init/$clusterName/oauth2-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/oauth2-oidc-sdk-5.24.1.jar http://central.maven.org/maven2/com/nimbusds/oauth2-oidc-sdk/5.24.1/oauth2-oidc-sdk-5.24.1.jar
wget --quiet -O /mnt/jars/driver-daemon/oauth2-oidc-sdk-5.24.1.jar http://central.maven.org/maven2/com/nimbusds/oauth2-oidc-sdk/5.24.1/oauth2-oidc-sdk-5.24.1.jar""", true)
// Create the json script.
dbutils.fs.put(s"/databricks/init/$clusterName/json-smart-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/json-smart-1.1.1.jar http://central.maven.org/maven2/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar
wget --quiet -O /mnt/jars/driver-daemon/json-smart-1.1.1.jar http://central.maven.org/maven2/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar""", true)
// Create the jwt script.
dbutils.fs.put(s"/databricks/init/$clusterName/jwt-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/nimbus-jose-jwt-7.0.1.jar http://central.maven.org/maven2/com/nimbusds/nimbus-jose-jwt/7.0.1/nimbus-jose-jwt-7.0.1.jar
wget --quiet -O /mnt/jars/driver-daemon/nimbus-jose-jwt-7.0.1.jar http://central.maven.org/maven2/com/nimbusds/nimbus-jose-jwt/7.0.1/nimbus-jose-jwt-7.0.1.jar""", true)
// Check that the cluster-specific init script exists.
display(dbutils.fs.ls(s"dbfs:/databricks/init/$clusterName/"))
Test Command
// Connect to Azure SQL Database via Active Directory Password Authentication
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
// Get Widget Values
var server = dbutils.widgets.get("server")
var database = dbutils.widgets.get("database")
var table = dbutils.widgets.get("table")
var username = dbutils.widgets.get("user")
var password = dbutils.widgets.get("password")
val config = Config(Map(
  "url" -> s"$server.database.windows.net",
  "databaseName" -> s"$database",
  "dbTable" -> s"$table",
  "user" -> s"$username",
  "password" -> s"$password",
  "authentication" -> "ActiveDirectoryPassword",
  "encrypt" -> "true",
  "ServerCertificate" -> "false",
  "hostNameInCertificate" -> "*.database.windows.net"
))
val collection = sqlContext.read.sqlDB(config)
collection.show()
As a 2020 update:
I did the cluster init scripts as described, but in the end my working setup didn't seem to require them.
I ended up using Scala 2.11 (note 2.11) and these libraries installed through the UI: com.microsoft.azure:azure-sqldb-spark:1.0.2 and mssql_jdbc_8_2_2_jre8.jar (note jre8). I also had to explicitly mention the driver class in the config:
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
val config = Config(Map(
  "url" -> "....database.windows.net",
  "databaseName" -> "...",
  "dbTable" -> "...",
  "accessToken" -> "...",
  "hostNameInCertificate" -> "*.database.windows.net",
  "encrypt" -> "true",
  "ServerCertificate" -> "false",
  "driver" -> "com.microsoft.sqlserver.jdbc.SQLServerDriver"
))
val collection = spark.read.sqlDB(config)
collection.show()
Token acquisition was done with msal (python):
import msal

TenantId = "...guid..."
authority = "https://login.microsoftonline.com/" + TenantId
scope = "https://database.windows.net//.default" #🤦 yes, with double "//"
ServicePrincipalId = "...guid..."
ServicePrincipalPwd = "secret"
app = msal.ConfidentialClientApplication(client_id=ServicePrincipalId, authority=authority, client_credential=ServicePrincipalPwd)
# Try the cache first, then fall back to a fresh client-credentials request
result = app.acquire_token_silent(scopes=[scope], account=None)
if not result:
    result = app.acquire_token_for_client(scopes=[scope])
if "access_token" in result:
    sqlAzureAccessToken = result["access_token"]
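Since the token is acquired in a Python cell but consumed by the Scala config above, it has to be handed across cells somehow. One option (a sketch; the conf key name is arbitrary, not a reserved Spark setting) is to stash it in the Spark conf:
# Python cell: make the token visible to other notebook cells via the Spark conf
spark.conf.set("spark.sqlAzureAccessToken", sqlAzureAccessToken)
A Scala cell can then read it back with spark.conf.get("spark.sqlAzureAccessToken") and use it as the "accessToken" value in the Config map.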

Problems with gradle-svntools-plugin when performing an SvnUpdate task

I am using the gradle-svntools-plugin to update my SVN source, but I am getting the following error when executing the task:
A problem occurred evaluating root project 'XBRLReports'.
Cannot cast object '12345' with class 'java.lang.String' to class 'java.lang.Long'
Here is the task in question:
task updateSource(type: SvnUpdate) {
    username = svn_username
    password = svn_password
    if ( project.hasProperty("rev") ) {
        revision = rev
        println "Revision --- $rev"
    }
    doLast {
        println "Revision --- " + revision
    }
}
The issue arises when I try to pass a command-line variable like so:
gradlew updateSource -Prev=12345
Manually setting revision to a static value also causes the issue. Printing out the value of revision returns null. I am not sure if this is a bug or if I am not using the plugin properly. The documentation is vague for this task. Here is the link to it:
gradle-svntools-plugin SvnUpdate
I have opened a ticket on github as well.
Thank you
Try this (project properties passed with -P are always Strings, while the plugin's revision property expects a Long, as the cast error indicates):
if ( project.hasProperty("rev") ) {
    revision = rev.toLong()
    println "Revision --- $rev"
}

Gradle: Trying to figure out an EOF error. Userguide: Example 14.5. Configuring arbitrary objects using a script.

I am on page 78 of the gradle userguide: Example 14.5. Configuring arbitrary objects using a script.
I have copied all of the code in the example:
build.gradle
task configure << {
    pos = java.text.FieldPosition( ) new 10
    // Apply the script
    apply from: 'other.gradle', to: pos
    println pos.beginIndex
    println pos.endIndex
}
other.gradle
beginIndex = 1;
endIndex = 5;
Output of gradle -q configure
D:\Gradle\ThisAndThat>gradle -q configure
FAILURE: Build failed with an exception.
Where: Build file 'D:\Gradle\ThisAndThat\build.gradle' line: 1
What went wrong: Could not compile build file 'D:\Gradle\ThisAndThat\build.gradle'.
> startup failed:
build file 'D:\Gradle\ThisAndThat\build.gradle': 1: expecting EOF, found 'configure' @ line 1, column 6.
task configure << { ^
1 error
I cannot figure out why I am getting this error. Any help would be appreciated. Thanks!
When I literally copy the code from Chapter 14.5 of the user guide, it works. Your mistake is in the build.gradle script:
task configure << {
    pos = java.text.FieldPosition( ) new 10
should be
task configure << {
    pos = new java.text.FieldPosition(10)
