Get Databricks Notebook Run ID

I have a Databricks Notebook task in Azure Data Factory. While it executes, I want to fetch its run ID and save it as a new field in the destination table. Is there a way to do this? Thanks.
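One common approach is to read the run ID from inside the notebook itself via the notebook context, then stamp it onto the DataFrame before writing. A minimal sketch of the parsing step in Python; the context field names (`currentRunId`, `id`) are assumptions based on the Databricks runtime and should be verified in your workspace:

```python
import json

# Inside a Databricks notebook you would obtain the context JSON with:
#   ctx_json = dbutils.notebook.entry_point.getDbutils() \
#       .notebook().getContext().toJson()
# Here a hypothetical sample stands in so the parsing logic is runnable:
ctx_json = '{"currentRunId": {"id": 123456}, "tags": {"jobId": "42"}}'

ctx = json.loads(ctx_json)
# currentRunId is populated for job runs (it may be null for interactive
# sessions) -- field names are assumptions, verify in your workspace.
run_id = ctx["currentRunId"]["id"]
print(run_id)
```

With the run ID in hand you can add it as a column before the write, e.g. `df.withColumn("run_id", lit(run_id))`. Alternatively, on the ADF side the Databricks Notebook activity's output exposes `runPageUrl`, which contains the run ID and can be passed to downstream activities.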

Related

Write Data to SQL DW from Apache Spark in Azure Synapse

When I write data to SQL DW in Azure from Databricks I use the following code:
example1.write.format("com.databricks.spark.sqldw") \
    .option("url", sqlDwUrlSmall) \
    .option("dbtable", "SampleTable12") \
    .option("forward_spark_azure_storage_credentials", "True") \
    .option("tempdir", tempDir) \
    .mode("overwrite") \
    .save()
This won't work in a Synapse notebook. I get the error:
Py4JJavaError: An error occurred while calling o174.save.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.sqldw. Please find packages at http://spark.apache.org/third-party-projects.html
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:656)
Caused by: java.lang.ClassNotFoundException: com.databricks.spark.sqldw.DefaultSource
Basically, I need to know the equivalent of com.databricks.spark.sqldw for Apache Spark in Azure Synapse.
Thanks
If you are writing to a dedicated SQL pool within the same Synapse workspace as your notebook, then it's as simple as calling the synapsesql method. Here is a simple parameterised example in Scala, using the parameter cell feature of Synapse notebooks.
// Read the table
val df = spark.read.synapsesql(s"${pDatabaseName}.${pSchemaName}.${pTableName}")
// do some processing ...
// Write it back with _processed suffixed to the table name
df.write.synapsesql(s"${pDatabaseName}.${pSchemaName}.${pTableName}_processed", Constants.INTERNAL)
If you are trying to write from your notebook to a different dedicated SQL pool, or to the old Azure SQL Data Warehouse, then it's a bit different, but there are some great examples here.
UPDATE: The items in curly brackets with the dollar sign (e.g. ${pDatabaseName}) are parameters. You can designate a parameter cell in your notebook so parameters can be passed in externally, e.g. from Azure Data Factory (ADF) or Synapse Pipelines using the Execute Notebook activity, and reused in the notebook, as per my example above. Find out more about Synapse Notebook parameters here.
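To make the parameter cell concrete: it is just an ordinary cell of default assignments that you mark as the parameters cell in the Synapse notebook UI; values supplied by the calling pipeline override these defaults at run time. A sketch in Python with hypothetical defaults (the names mirror the Scala example above):

```python
# Parameter cell (assumption: this cell is toggled as "Parameters" in
# the Synapse notebook; the Execute Notebook activity in ADF/Synapse
# Pipelines overrides these defaults at run time).
pDatabaseName = "SQLPool01"   # hypothetical default values
pSchemaName = "dbo"
pTableName = "SampleTable12"

# The three-part table name used by the read/write calls above.
table = f"{pDatabaseName}.{pSchemaName}.{pTableName}"
print(table)
```

At run time the pipeline passes pDatabaseName, pSchemaName and pTableName in the activity's parameters, and the rest of the notebook sees the overridden values.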

UiPath - Unable to Insert ServiceNow Table Record: System.AggregateException

I am trying to automate via UiPath the insertion of data into a ServiceNow table.
I am following the step by step link
(https://connect.uipath.com/marketplace/components/servicenow-v1/media)
to automate the insertion.
However, when running my project I get the following error:
I made and tested the connection with my instance normally, but this error occurs during the project run …
Can you help me?

Is there any way to deal with Minio file by sql-based statements?

I am new to MinIO and object-based storage.
I know there is the S3 Select API, but I want to add a new row or update a specific row in a CSV file in MinIO without needing to download it and upload it again.
Is there any way to do it?
In other words, I want to use SQL-based statements (INSERT/UPDATE) on a file stored in MinIO.
SQL can only change databases; MinIO can only serve CSVs for import into and export from a database, and S3 Select is read-only. So the answer, for now, is no: there is no in-place update. The easiest way to edit the CSV would be to write a script which either:
Connects to the database and changes the file in the database's directory, or
Downloads the file, edits it locally, and then uploads it again.
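Since objects in MinIO (as in S3) are immutable, the second option amounts to rewriting the whole object. A minimal sketch of the edit step in Python, using only the stdlib csv module; in practice you would fetch the content with the MinIO client's get_object() and write the result back with put_object() (bucket and object names would be yours):

```python
import csv
import io

def append_row(csv_text: str, row: list) -> str:
    """Append one row to CSV content held in memory.

    MinIO objects cannot be edited in place, so the caller must
    download the object, transform it like this, and upload the
    full result as a replacement object.
    """
    rows = list(csv.reader(io.StringIO(csv_text)))
    rows.append(row)
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(rows)
    return out.getvalue()

# Example: add one row to a two-line CSV.
print(append_row("id,name\n1,alice\n", ["2", "bob"]))
```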

How to check who created database in impala

I have one Hadoop cluster (Cloudera distribution) to which multiple users have access. Different users are creating databases. How do I verify which user created which database? Can anyone suggest a way?
Use the query below:
DESCRIBE FORMATTED databaseName.tableName;
It will show the owner and other details such as table type and size. For the database itself, DESCRIBE DATABASE FORMATTED databaseName; will show the database owner (if your Impala version supports DESCRIBE DATABASE).

Save data from database

I have created an application with a LightSwitch internal database.
Now I want to publish my application, and I also want to publish the data in the internal database. How can I do this?
For example: I have an application, Fantacalcio, and I created some players in LightSwitch's internal database. When I publish the application and install it on my PC, there is no data in it. I want the players I created earlier to be there when the application is installed.
You can do it programmatically in something like Application_Initialize, or in a SQL script.
LightSwitch has no built-in way to pre-populate data, so it's a matter of choosing a workaround.
One possible way is to do the following:
Attach the LightSwitch internal database to SQL Server.
Export all the data into a SQL script; here are the instructions.
After you have the SQL script (mostly INSERT statements), run the script on your designated database.
The exact same data should now be populated there.
