AZURE_QUOTA_EXCEEDED_EXCEPTION, how to solve this? - azure-databricks

Quota Exceeded
When trying to start a warehouse in Azure Databricks SQL Analytics, this error appears while importing data into a dashboard.

You need to follow the documentation on increasing the corresponding quota for Azure.
But really, you should deploy a new workspace that doesn't require public IPs for cluster nodes - so-called secure cluster connectivity, or No Public IP (doc). Unfortunately, right now you can't change this setting for an existing workspace.
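If you're not sure which quota is exhausted, a quick way to check current usage per region is the Azure CLI. A minimal sketch, assuming the az CLI is installed and logged into the right subscription (westeurope is just an example region):

# Compute quotas (vCPUs per VM family) in the workspace's region
az vm list-usage --location westeurope --output table
# Networking quotas, including Public IP Addresses, which Databricks
# consumes for cluster nodes unless No Public IP is enabled
az network list-usages --location westeurope --output table

The quota increase itself is requested through the Azure portal as described in the documentation linked above.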

Related

Can a Databricks cluster be shared across workspaces?

My ultimate goal is to differentiate/manage the cost on Databricks (Azure) based on different teams/projects.
And I was thinking whether I could utilize workspaces to achieve this.
I read the following; it sounds like a workspace can access a cluster, but it does not say whether multiple workspaces can access the same cluster or not.
A Databricks workspace is an environment for accessing all of your Databricks assets. The workspace organizes objects (notebooks, libraries, and experiments) into folders, and provides access to data and computational resources such as clusters and jobs.
In other words, can I create a cluster and somehow ensure it can only be accessed by a certain project, team, or workspace?
To manage who can access a particular cluster, you can make use of cluster access control. With cluster access control, you can determine what users can do on the cluster, e.g. attach to the cluster, restart it, or fully manage it. You can do this on a user level but also on a user-group level. Note that you have to be on the Azure Databricks Premium Plan to make use of cluster access control.
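As a rough sketch of what this looks like outside the UI, you could grant a group attach-only rights through the Permissions API; the workspace URL, cluster ID, and group name below are placeholders:

# Grant the group "team-a" attach-only rights on one cluster (illustrative values)
curl -X PATCH https://<databricks-instance>/api/2.0/permissions/clusters/<cluster-id> \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  -d '{"access_control_list": [{"group_name": "team-a", "permission_level": "CAN_ATTACH_TO"}]}'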
You also mentioned that your ultimate goal is to differentiate/manage costs on Azure Databricks. For this you can make use of tags. You can tag workspaces, clusters, and pools; these tags are then propagated to cost analysis reports in the Azure portal (see here).
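For example, a minimal cluster definition with cost-tracking tags might look like the sketch below. The tag keys, cluster name, and node type are illustrative, and it assumes the databricks CLI is already configured:

# Write an illustrative cluster spec with custom_tags for cost attribution
cat > /tmp/tagged_cluster.json <<'EOF'
{
  "cluster_name": "team-a-etl",
  "spark_version": "7.3.x-scala2.12",
  "node_type_id": "Standard_DS3_v2",
  "num_workers": 2,
  "custom_tags": { "Team": "team-a", "Project": "etl-pipeline" }
}
EOF
# Create the cluster from that spec
databricks clusters create --json-file /tmp/tagged_cluster.json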

How to use Azure Spot instances on Databricks

Spot instances bring the possibility to use spare resources in the cloud at a lower price; however, if cloud demand increases, your resources will be deallocated. This is very useful for non-critical workloads, whenever you can afford to lose some of the work done. More info.
Databricks has the possibility to run spot instances on AWS, but there is no documentation about how to do it on Azure.
Is it possible to run Databricks clusters on Azure Spot instances?
Yes, it is possible, but not through the Databricks UI. To use Azure spot instances on Databricks, you need to use the databricks CLI.
Note
With the CLI tool it is possible to administer (create, edit, delete) clusters and instance pools. However, to simplify the process, I'll focus on editing an existing cluster.
You can install the databricks CLI using pip install databricks-cli and configure your credentials with databricks configure --token. For more information, visit the Databricks documentation.
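For reference, the setup boils down to the two commands below (the configure command will prompt for your workspace URL and a personal access token):

pip install databricks-cli
databricks configure --token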
Run the command databricks clusters list to find the ID of the cluster you want to modify:
$ databricks clusters list
0422-112415-fifes919 Big Spark3 TERMINATED
0612-341234-jails230 Normal Spark3 TERMINATED
0212-623261-mopes727 Small 7.6 TERMINATED
In my case, I have 3 clusters. The first column is the cluster ID, the second is the name of the cluster, and the last column is the state.
The command databricks clusters get returns the cluster config in JSON format. Let's generate the JSON file so we can modify it:
databricks clusters get --cluster-id 0422-112415-fifes919 > /tmp/my_cluster.json
This file contains all the configuration related to the cluster, like name, instance type, owner... In our case we are looking for the azure_attributes section. You will see something similar to:
...
"azure_attributes": {
"first_on_demand": 1,
"availability": "ON_DEMAND_AZURE",
"spot_bid_max_price": -1.0
},
...
We need to change availability to SPOT_WITH_FALLBACK_AZURE and set spot_bid_max_price to our bid price. Edit the file with your favorite tool. The result should be something like:
...
"azure_attributes": {
"first_on_demand": 1,
"availability": "SPOT_WITH_FALLBACK_AZURE",
"spot_bid_max_price": 0.4566
},
...
Once modified, just update the cluster with the new configuration file using databricks clusters edit:
databricks clusters edit --json-file /tmp/my_cluster.json
Now, every time you start the cluster, the workers will be spot instances. To confirm this, you can go to the configuration tab of a worker VM allocated in the resource group managed by Databricks. You will see that Azure Spot is active, with the price you configured.
Databricks on AWS has more configuration options, like SPOT for the availability field. However, until the Azure documentation is released, we'll need to wait or configure things with a trial-and-error approach.

Set up auto-scaling Elasticsearch behind a GCP load balancer

I have set up the Elasticsearch Certified by Bitnami image on GCP,
which I would like to put behind HTTP(S) Load Balancing on GCP for auto-scaling purposes. What I have done is create a snapshot and use it to create an image for an instance template. But the instance group kept returning "instance is being verified" and "Recreated instance" for a long time and I don't know where the problem is, so I decided to use the default instance template from GCP instead.
My question is: when a new node is created, or when the data in Elasticsearch is updated, how can I sync data between the nodes behind the GCP load balancer? Think about when there is high traffic and the load balancer creates a new node: when a query comes in from the load balancer, how does the new node have exactly the same data as the existing nodes, and when a new index comes in, how do all the nodes get the new index?
PS: I don't mind some delay; if it is less than 5 minutes, it is acceptable.
Thanks in advance for helping out.
In GCP, if you want to sync your data between nodes, we recommend using a centralized location to store your data. You can use Cloud Storage, Cloud SQL, Filestore, etc. You can check this link to find more options for data storage. Then you can create an instance template specifying that, when any instance is created, it will use the custom image and have access to that centralized data store. This is the recommended workaround, rather than replicating data to new instances. You can check this link for a similar thread.
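As a sketch of that instance-template approach with the gcloud CLI (the image, project, template, and group names are placeholders):

# Create an instance template from the custom image so every new node starts from the same baseline
gcloud compute instance-templates create es-node-template \
  --image my-elasticsearch-image --image-project my-project \
  --machine-type e2-standard-4
# Back the managed instance group behind the load balancer with that template
gcloud compute instance-groups managed create es-group \
  --template es-node-template --size 3 --zone us-central1-a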
For your Elasticsearch setup, I'd recommend deploying an Elasticsearch cluster, which provides multiple VMs whose configuration you can customize. If you deploy a cluster, this other Stack Overflow post suggests that it is not necessary to use a load balancer, as Elasticsearch handles the load between the nodes itself.

Cluster CDH installation stuck at download

I am trying to set up a cluster on 3 nodes on a cloud server with Cloudera Manager, but at the cluster installation step it gets stuck at 64%. Please guide me on how to move forward and where to see the logs for this.
Following is the image of the installation screen
Some cloud companies have policies whereby, if lots of data requests are coming in, they remove the IP from public hosting for some time. This is done to prevent DDoS attacks.
A solution can be to ask them to raise the data transfer limit.
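Regarding where to look for logs, Cloudera Manager typically writes them to the following default locations (paths may vary with your version and install method):

# On the Cloudera Manager host
tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log
# On each cluster node
tail -f /var/log/cloudera-scm-agent/cloudera-scm-agent.log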

Connect Hadoop cluster to multiple Google Cloud Storage buckets in multiple Google Projects

Is it possible to connect my Hadoop cluster to multiple Google Cloud projects at once?
I can easily use any Google Storage bucket in a single Google project via the Google Cloud Storage Connector, as explained in this thread: Migrating 50TB data from local Hadoop cluster to Google Cloud Storage. But I can't find any documentation or example of how to connect to two or more Google Cloud projects from a single map-reduce job. Do you have any suggestion/trick?
Thanks a lot.
Indeed, it is possible to connect your cluster to buckets from multiple different projects at once. Ultimately, if you're using the instructions for using a service-account keyfile, the GCS requests are performed on behalf of that service account, which can be treated more or less like any other user. You can either add the service account email your-service-account-email@developer.gserviceaccount.com to all the different cloud projects owning the buckets you want to process, using the permissions section of cloud.google.com/console and simply adding that email address like any other member, or you can set GCS-level access to add that service account like any other user.
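As a sketch, granting that service account read access to a bucket owned by another project could also be done with gsutil (the email and bucket name are placeholders):

# Give the Hadoop cluster's service account read access to a bucket in another project
gsutil iam ch \
  serviceAccount:your-service-account-email@developer.gserviceaccount.com:roles/storage.objectViewer \
  gs://bucket-in-other-project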
