DBU Range for cluster: how is DBU consumption determined - azure-databricks

The cluster summary in Databricks shows a DBU range of 1.5-2.25 DBU/h. In which scenario are 1.5 DBU/h used, and in which scenario are 2.25 DBU/h used?
I guess it's because of the workers (but 1.5 x 2 = 3).

Converting the discussion in the comments so it might help other community members:
If you deselect autoscaling, a fixed number of workers is assigned to the cluster, each with its own DBU rate. When you use the autoscaling option, the DBU consumption depends on the number of workers actually in use; in your case it falls between 1.5 and 2.25 DBU/h, so the actual figure is determined by the operations you run.
The DBU/h is allocated per node: with this instance type the driver and each worker are rated at 0.75 DBU/h each, so a driver plus 1 worker gives 1.5 DBU/h, a driver plus 2 workers gives 2.25 DBU/h, and so on.
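As a back-of-the-envelope check, here is a minimal sketch that reproduces the range in the summary, assuming the driver and the workers all use the same instance type rated at 0.75 DBU/h and autoscaling is set to 1-2 workers (which matches the numbers above):

public class DbuRange {
    public static void main(String[] args) {
        double dbuPerNode = 0.75;                       // assumed instance rating, DBU/h
        int minWorkers = 1, maxWorkers = 2;             // assumed autoscaling range
        double minDbu = dbuPerNode * (1 + minWorkers);  // driver + 1 worker  = 1.5 DBU/h
        double maxDbu = dbuPerNode * (1 + maxWorkers);  // driver + 2 workers = 2.25 DBU/h
        System.out.println(minDbu + " - " + maxDbu);    // prints "1.5 - 2.25"
    }
}

With autoscaling deselected, the rate is simply fixed at dbuPerNode * (1 + number of workers) for whatever worker count you configured.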
(The original answer included a reference image illustrating the same calculation.)

Related

About the effect of parallelism in StormCrawler

I am currently working on a StormCrawler-based project. We have a fixed and limited amount of bandwidth for fetching pages from the web. We have 8 workers with a large parallelism hint for the different bolts in the topology (i.e. 50), so lots of threads are created for fetching pages. Is there any relation between the increasing number of fetch_error results and the increasing parallelism_hint in the project? How can I determine a good value for the parallelism_hint in StormCrawler?
The parallelism hint is not something that should be applied to all bolts indiscriminately.
Ideally, you need one instance of FetcherBolt per worker, so in your case 8. As you've probably read in the WIKI or seen in the conf, the FetcherBolt handles internal threads for fetching. This is determined by the config fetcher.threads.number which is set to 50 in the archetypes' configurations (assuming this is what you used as a starting point).
Using too many FetcherBolt instances is counterproductive. It is better to change the value of fetcher.threads.number instead. If you have 50 Fetcher instances with a default number of threads of 50, that would give you 2500 fetching threads which might be too much for your available bandwidth.
As I mentioned before, you want 1 FetcherBolt per worker; the number of internal fetching threads per bolt depends on your bandwidth. There is no hard rule for this, it depends on your situation.
One constant I have observed, however, is the ratio of parsing bolts to fetcher bolts: usually, 4 parsers per fetcher works fine. Run Storm in deployed mode and check the capacity value for the parser bolts in the UI. If the value is 1 or above, try using more instances and see if it affects the capacity.
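To make those numbers concrete for 8 workers, here is a minimal sketch of the wiring. Class and package names follow the pre-Apache com.digitalpebble StormCrawler releases, the "partitioner" upstream component is a placeholder, and the spout, status updater and politeness groupings of a real crawl topology are omitted, so treat this as an illustration of the parallelism ratios rather than a working crawler:

import org.apache.storm.Config;
import org.apache.storm.topology.TopologyBuilder;
import com.digitalpebble.stormcrawler.bolt.FetcherBolt;
import com.digitalpebble.stormcrawler.bolt.JSoupParserBolt;

public class CrawlParallelismSketch {
    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        // ... spout and URL partitioning omitted for brevity ...
        // One FetcherBolt instance per worker: 8 workers -> parallelism hint of 8.
        builder.setBolt("fetch", new FetcherBolt(), 8)
               .shuffleGrouping("partitioner");            // placeholder upstream component
        // Roughly 4 parsers per fetcher.
        builder.setBolt("parse", new JSoupParserBolt(), 32)
               .localOrShuffleGrouping("fetch");

        Config conf = new Config();
        conf.setNumWorkers(8);
        // Tune the per-bolt fetching threads to your bandwidth instead of
        // adding more FetcherBolt instances (8 x 50 = 400 threads here).
        conf.put("fetcher.threads.number", 50);
        // ... submit with StormSubmitter as in the archetype ...
    }
}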
In any case, not all bolts need the same level of parallelism.

How to increase greenplum concurrency and # query per sec

We have a fairly big Greenplum v4.3 cluster. 18 hosts, each host has 3 segment nodes. Each host has approx 40 cores and 60G memory.
The table we have is 30 columns wide and has 0.1 billion rows. The query we are testing has a 3-10 sec response time when there is no concurrency pressure. As we increase the number of queries fired in parallel, the latency increases from an average of 3 secs to around 50 secs, as expected.
But we've found that regardless of how many queries we fire in parallel, we only get a very low QPS (queries per second), almost just 3-5 queries/sec. We've set max_memory=60G, memory_limit=800MB, and active_statements=100, hoping the CPU and memory can be highly utilized, but they are still poorly used, around 30%-40%.
I have a strong feeling that we are hammering the cluster with parallel queries in the hope of getting the best out of the CPU and memory utilization, but it doesn't work as we expected. Is there anything wrong with the settings, or is there anything else I am not aware of?
There might be multiple reasons for such behavior.
Firstly, every Greenplum query uses no more than one processor core on each logical segment. Say you have 3 segments on every node with 40 physical cores: running two parallel queries will utilize at most 2 x 3 = 6 cores on every node, so you will need about 40 / 3 ~= 13-14 parallel queries to utilize all of your CPUs. So, for your number of cores per node it may be better to create more segments (gpexpand can do this). By the way, are the tables used in the queries compressed?
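A rough back-of-the-envelope version of the core-count calculation above, assuming the one-core-per-segment-per-query rule and ignoring master and motion overhead:

public class GreenplumCpuSaturation {
    public static void main(String[] args) {
        int coresPerHost = 40;       // from the question
        int segmentsPerHost = 3;     // from the question
        int concurrentQueries = 2;
        // Each query can use at most one core per segment on a host.
        int coresUsed = concurrentQueries * segmentsPerHost;
        System.out.println(coresUsed + " of " + coresPerHost + " cores busy");   // 6 of 40
        int queriesToSaturate = (int) Math.ceil((double) coresPerHost / segmentsPerHost);
        System.out.println(queriesToSaturate + " parallel queries to keep all cores busy"); // 14
    }
}

Adding segments per host (via gpexpand) lowers that number, because each query then gets more cores per host.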
Secondly, it may be a bad query. If you provide the query plan, it may help in understanding what is going on. There are some query types in Greenplum for which the master can become a bottleneck.
Finally, it might be some bad OS or block-device settings.
I think the documentation page Managing Resources might help you manage your resources.
You can use Resource Groups to limit/control your resources, especially the CONCURRENCY attribute (the maximum number of concurrent transactions, including active and idle transactions, that are permitted in the resource group).
Resource queues help limit ACTIVE_STATEMENTS.
Note: ACTIVE_STATEMENTS is the total number of statements currently running; when your queries cost around 50 s each and new queries keep arriving, a low value may not work well, maybe 5 * 50 is better.
Also, you need to configure the memory/CPU settings so that your queries can proceed.

Storm topology processing slowing down gradually

I have been reading about Apache Storm and have tried a few examples from storm-starter. I have also learnt how to tune a topology and how to scale it to perform fast enough to meet the required throughput.
I have created an example topology with acking enabled and I am able to achieve 3K-5K messages processed per second. It performs really fast in the initial 10 to 15 minutes, or around 1-2 million messages, and then it starts slowing down. On the Storm UI, I can see the overall latency start going up gradually and never come back; after a while the processing drops to only a few hundred messages a second. I am getting exactly the same behavior for all the topologies I tried. The simplest one just reads from Kafka using the KafkaSpout, sends it to a transformer bolt that parses the message, and sends it to Kafka again using the KafkaBolt. The parser is very fast, as it takes less than a millisecond to parse a message. I tried a few options such as increasing/decreasing the parallelism and changing the buffer sizes, but I see the same behavior. Please help me find the reason for the gradual slowdown in the topology. Here is the config I am using:
1 Nimbus machine (4 CPU) 24GB RAM
2 Supervisor machines (8CPU) and using 1 thread per core with 24GB RAM
4 Node kafka cluster running on above 2 supervisor machines (each topic has 4 partitions)
KafkaSpout(2 parallelism)-->TransformerBolt(8)-->KafkaBolt(2)
topology.executor.receive.buffer.size: 65536
topology.executor.send.buffer.size: 65536
topology.spout.max.batch.size: 65536
topology.transfer.buffer.size: 32
topology.receiver.buffer.size: 8
topology.max.spout.pending: 250
Screenshots from the Storm UI and from VisualVM (threads view) show normal latency at the start and after a few minutes, the latency starting to climb after about 45 minutes, and after about 80 minutes the latency still climbing, reaching roughly 100 sec by the time 8 to 10 million messages have been processed.
Pay attention to the capacity metric on RT_LEFT_BOLT: it is very close to 1, which explains why your topology is slowing down.
From the Storm documentation:
The Storm UI has also been made significantly more useful. There are new stats "#executed", "execute latency", and "capacity" tracked for all bolts. The "capacity" metric is very useful and tells you what % of the time in the last 10 minutes the bolt spent executing tuples. If this value is close to 1, then the bolt is "at capacity" and is a bottleneck in your topology. The solution to at-capacity bolts is to increase the parallelism of that bolt.
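As a rough illustration of how that percentage comes about for a single executor (the figures below are invented for the example, not taken from your screenshots):

public class CapacityEstimate {
    public static void main(String[] args) {
        // Hypothetical numbers for one executor over the 10-minute window.
        double executed = 120_000;          // tuples executed in the window
        double executeLatencyMs = 4.8;      // average execute latency per tuple (ms)
        double windowMs = 10 * 60 * 1000;   // 10-minute measurement window (ms)
        // Fraction of the window the executor spent executing tuples.
        double capacity = executed * executeLatencyMs / windowMs;
        System.out.println(capacity);       // ~0.96 -> the bolt is "at capacity"
    }
}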
Therefore, your solution is to add more executors (and tasks) to that given bolt (RT_LEFT_BOLT). Another thing you can do is reduce the number of executors on RT_RIGHT_BOLT; its capacity indicates you don't need that many executors, and probably 1 or 2 can do the job.
The issue was due to the GC settings with the new-gen parameters: the JVM was not using the allocated heap completely, so the internal Storm queues were getting full and running out of memory. The strange thing was that Storm did not throw an out-of-memory error; it just stalled. With the help of VisualVM I was able to trace it down.
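The post does not say which exact flags fixed it, but as a hedged sketch, explicit heap and new-generation sizing for the worker JVMs would be passed through the worker child options; the sizes below are illustrative only (in older Storm versions the package is backtype.storm rather than org.apache.storm):

import org.apache.storm.Config;

public class WorkerGcSettingsSketch {
    public static void main(String[] args) {
        Config conf = new Config();
        // Illustrative sizes only: pin -Xms/-Xmx so the worker actually uses the
        // heap it was given, size the new generation (-Xmn) explicitly, and turn
        // on -verbose:gc so the behaviour is easy to confirm in VisualVM.
        conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Xms4g -Xmx4g -Xmn1g -verbose:gc");
        // ... submit the topology with this conf as usual ...
    }
}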

WebSphere Liberty auto scaling CPU metric

WAS Liberty has an auto-scaling feature, and one of its metrics is the CPU metric.
My question is about the case when I have many JVMs being members of different clusters running on the same host. Is Liberty measuring the CPU consumption of each JVM or is it per host? If it is host-wide, then scaling decisions are almost meaningless because how does it know which JVM is actually consuming all the CPU time and needs to be scaled out? If I have 5 JVMs and 4 of those are not doing any work and one JVM is taking 99% of my CPU on the host, I only need to scale that JVM, not all five of them.
The manual says metrics trigger on averages across a cluster, so for the scenario you outline:
If they are all in a single cluster, there is really no scaling action required and it likely wouldn't help.
If the spinning JVM is in a single or very small cluster (without the other 4), then it could trip the CPU metric, depending on the average %cpu and the configuration.
Resource usage is compared against its threshold at each level (JVM, host and cluster). When the usage goes beyond the threshold at one of the levels, the scale-out action is executed.

UDF optimization in Hadoop

I am testing my UDF on a Windows virtual machine with 8 cores and 8 GB RAM. I have created 5 files of about 2 GB each and ran the Pig script after modifying "mapred.tasktracker.map.tasks.maximum".
The runtimes and statistics are as follows:
mapred.tasktracker.map.tasks.maximum = 2
duration = 20 min 54 sec
mapred.tasktracker.map.tasks.maximum = 4
duration = 13 min 38 sec and about 30 sec per task
35% better
mapred.tasktracker.map.tasks.maximum = 8
duration = 12 min 44 sec and about 1 min per task
only 7% better
Why such a small improvement when changing the setting? Any ideas? The job was divided into 145 tasks.
(The original question included screenshots of the runs with 4 map slots and with 8 map slots.)
A couple of observations:
I imagine your Windows machine only has a single disk backing this VM, so there is a limit to how much data you can read off the disk at any one time (and write back for the spills). By increasing the task slots, you're effectively driving up the read/write demands on your disk (and potentially causing more disk thrashing too). If you have multiple disks backing your VM (and not virtual disks all on the same physical disk; I mean virtual disks backed by different physical disks), you would probably see a performance increase over what you've already seen.
By adding more map slots, you've reduced the number of assignment waves that the Job Tracker needs to do, and each wave has a polling overhead (the TT polling for tasks, the JT polling the TTs and assigning new tasks to free slots). A 2-slot TT vs an 8-slot TT means roughly 145 / 2 ~= 73 assignment waves (if all tasks ran in equal time, which is obviously not realistic) vs 145 / 8 ~= 19 waves; that's roughly 4x as much polling with 2 slots as with 8 (and it all adds up).
mapred.tasktracker.map.tasks.maximum configures the maximum number of map tasks that will be run simultaneously by a task tracker. There is a practical hardware limit to how many tasks a single node can run at a time. So there will be diminishing returns when you keep increasing this number.
For example, say the tasktracker node has 8 cores, and say 4 cores are being used by processes other than the tasktracker; that leaves 4 cores for the mapred tasks. So your task time will improve from mapred.tasktracker.map.tasks.maximum = 1 up to 4, but after that it will just remain static because the extra tasks will simply be waiting. In fact, if you increase it too much, the contention and context switching might make it slower. The recommended value for this parameter is the number of CPU cores - 1.
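For reference, this is a cluster-side setting in classic MR1: it lives in each TaskTracker's mapred-site.xml (and needs a TaskTracker restart), not in the per-job configuration. A hedged example following the cores-minus-one rule for the 8-core VM above:

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>7</value>  <!-- 8 cores - 1, per the recommendation above -->
</property>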
