Assume I have a list of newspaper subscribers, their geographical locations (e.g. postal codes), and the distances between those locations.
The goal is to cluster subscribers into 'rounds'. The size/length of a round is constrained by time: only limited time is available for delivery each morning, which limits the maximum distance per round.
Is there an algorithm that lets me cluster subscribers/addresses into 'rounds' in such a way that I minimise the number of required paperboys/rounds?
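This is essentially a capacity-constrained clustering problem (a relative of bin packing and the distance-constrained vehicle routing problem), so minimising the number of rounds exactly is NP-hard in general; a greedy construction is a common starting point. Below is a minimal sketch, assuming each subscriber has planar coordinates (e.g. postal-code centroids) and that the time limit translates into a per-round distance budget. The data and the `MAX_ROUND_DISTANCE` value are made up for illustration.

```python
import math

# Hypothetical subscriber coordinates (e.g. postal-code centroids); data is made up.
subscribers = {
    "A": (0.0, 0.0), "B": (1.0, 0.2), "C": (1.5, 1.0),
    "D": (5.0, 5.0), "E": (5.5, 4.8), "F": (9.0, 0.5),
}

MAX_ROUND_DISTANCE = 3.0  # assumed per-round travel budget derived from the morning time limit


def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])


def greedy_rounds(points, budget):
    """Grow one round at a time: repeatedly add the nearest unassigned address
    until the next step would exceed the distance budget, then start a new round."""
    unassigned = dict(points)
    rounds = []
    while unassigned:
        # Seed a new round with an arbitrary unassigned address.
        name, pos = next(iter(unassigned.items()))
        del unassigned[name]
        route, used = [name], 0.0
        while unassigned:
            # Closest remaining address to the current position.
            nxt = min(unassigned, key=lambda n: dist(pos, unassigned[n]))
            step = dist(pos, unassigned[nxt])
            if used + step > budget:
                break
            used += step
            pos = unassigned.pop(nxt)
            route.append(nxt)
        rounds.append(route)
    return rounds


print(greedy_rounds(subscribers, MAX_ROUND_DISTANCE))
```

For an exact (or near-exact) minimum number of rounds, the same model can be handed to a vehicle-routing solver such as OR-Tools, with the distance budget expressed as a route constraint and a fixed cost per vehicle so that the solver prefers fewer rounds.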
Much of Vertex AI's pricing is calculated per node hour. What is a node hour and how do I go about estimating how many I'll need for a given job?
A node hour represents the time a virtual machine spends running your prediction job or waiting in a ready state to handle prediction or explanation requests. One node running for one hour consumes one node hour.
The price of a node hour varies across regions and by operation.
You can consume node hours in fractional increments. For example, one node running for 30 minutes costs 0.5 node hours.
There are tables in the pricing documentation that can help you estimate your costs, and you can use the Cloud Pricing Calculator.
You can also use Billing Reports to monitor your usage.
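As a back-of-the-envelope sketch of the arithmetic above (the hourly rate below is a placeholder; use the per-region, per-operation rate from the pricing tables):

```python
# Estimate node hours and cost for a prediction workload.
nodes = 2                    # replicas running (serving or sitting ready)
hours_running = 1.5          # wall-clock hours the nodes are up
price_per_node_hour = 0.75   # assumed USD rate, for illustration only

node_hours = nodes * hours_running           # node hours accrue fractionally
estimated_cost = node_hours * price_per_node_hour

print(f"{node_hours} node hours ~= ${estimated_cost:.2f}")
# 2 nodes * 1.5 h = 3.0 node hours ~= $2.25 at the assumed rate
```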
How can I calculate a risk percentage for each customer and cluster customers by it, if the only details available are the profit per customer and the number of orders?
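One way to read this: build a risk score from the two available features and cluster customers on them, for example with k-means. The sketch below uses made-up data and a simple "low profit and few orders means higher risk" scoring rule, which is an assumption rather than a standard definition; scaling the features first matters because profit and order counts live on very different ranges.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Made-up customer data: [profit, number_of_orders]; replace with the real table.
X = np.array([
    [1200, 15], [300, 2], [50, 1], [900, 10],
    [75, 3], [1500, 20], [200, 1], [650, 8],
], dtype=float)

# Put both features on a comparable scale before clustering.
X_scaled = StandardScaler().fit_transform(X)

# Cluster into e.g. 3 risk bands; the number of clusters is a modelling choice.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)

# One possible "risk percentage": a score that treats low profit and few orders
# as riskier (an assumption), rescaled to 0-100.
score = -X_scaled.sum(axis=1)
risk_pct = 100 * (score - score.min()) / (score.max() - score.min())

for i, (lab, r) in enumerate(zip(labels, risk_pct)):
    print(f"customer {i}: cluster {lab}, risk ~ {r:.0f}%")
```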
I would like to parallelize population dynamics for individuals moving on a 2D landscape. The landscape will be divided into cells with each processing core operating on individuals that exist in a specific cell.
The problem is that the individuals move and therefore travel between cells. Meanwhile, the positions of the individuals in a given cell (and its neighboring cells) must be known at every point in time in order to determine when pairs of individuals can mate.
With MPI (e.g. Open MPI), it would be necessary to pass each individual's data structure (in this case, a list of mutations and their locations in a genome) as a message whenever it moves to a different cell, which would be very slow.
However, it seems that in OpenMP the processing cores can share the memory holding the entire list of genomes/individuals (i.e., for all cells). In that case there would be no need for message passing, and the code could be very efficient.
Is my understanding of OpenMP correct? The nodes on my cluster each contain 32 processing cores. Does this mean I am limited to sharing memory among those 32 cores?
Thank you
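Broadly, yes: OpenMP is a shared-memory model, so its threads see one address space without any message passing, but that address space is confined to a single node (your 32 cores); spanning multiple nodes still needs message passing or a hybrid MPI+OpenMP design. OpenMP itself is a C/C++/Fortran directive system, so the sketch below is not OpenMP but a Python analogue using `multiprocessing.shared_memory`, purely to illustrate the single-node shared-memory idea: several workers update one array in place with no messages exchanged.

```python
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory
import numpy as np

# Illustration of single-node shared memory (not OpenMP): all workers attach to
# the same buffer and update disjoint slices in place, so nothing is copied or
# sent between processes.

N = 1_000_000


def worker(shm_name, start, stop):
    shm = SharedMemory(name=shm_name)
    data = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
    data[start:stop] += 1.0   # in-place update of this worker's slice
    shm.close()


if __name__ == "__main__":
    shm = SharedMemory(create=True, size=N * 8)
    data = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
    data[:] = 0.0

    n_workers = 4
    chunk = N // n_workers
    procs = [Process(target=worker, args=(shm.name, i * chunk, (i + 1) * chunk))
             for i in range(n_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    print(data.sum())   # 1000000.0: every worker wrote to the shared array
    shm.close()
    shm.unlink()
```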
I'm unsure how the cost per share is calculated for an entire account. I've tried dividing the Total Cost Basis by the Billing Market Value, but that did not produce the number that is showing. I've also tried adding up the cost per share of each lot held in the account and dividing by the number of lots, but that did not work either.
Cost per share or "Cost Basis Per Unit" is calculated as the Cost Basis (with or without amortization, depending on your settings) divided by the number of units in the lot.
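A tiny worked sketch of that formula with made-up lot numbers. The account-level figure in this kind of view is often the unit-weighted average (total cost basis divided by total units) rather than an average of the per-lot values, but that is an assumption about the platform, not something the per-lot definition above guarantees.

```python
# Cost basis per unit for a single lot: cost basis / units in the lot.
# Numbers are made up for illustration.
lots = [
    {"units": 100, "cost_basis": 2500.00},   # -> 25.00 per share
    {"units": 40,  "cost_basis": 1200.00},   # -> 30.00 per share
]

for lot in lots:
    per_unit = lot["cost_basis"] / lot["units"]
    print(f"{lot['units']} units, basis {lot['cost_basis']:.2f} -> {per_unit:.2f}/share")

# One plausible account-level aggregation (an assumption): unit-weighted average,
# i.e. total cost basis over total units, not the mean of the lot-level values.
total_units = sum(l["units"] for l in lots)
total_basis = sum(l["cost_basis"] for l in lots)
print(f"account average: {total_basis / total_units:.2f}/share")
```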
What deterministic algorithm is suitable for the following resource allocation/scheduling problem?
Consider a set of players: P1, P2, P3 and P4. Each player receives data from a cell tower (e.g. in a wireless network). The tower transmits data in 1-second blocks. There are 5 blocks. Each player can be scheduled to receive data in an arbitrary number of the blocks.
Now, the amount of data received in each block is a constant (C) divided by the number of other players scheduled in the same block (because the bandwidth must be shared). A greedy approach would allocate every player to every block, but then the data received per block would be reduced.
How can we find an allocation of players to time blocks so that the amount of data delivered by the network is maximised? I have tried a number of heuristic methods on this problem (genetic algorithms, simulated annealing) and they work well. However, I'd like to solve for the optimum schedule.
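At this size the optimum can be found by exhaustive search: with 5 blocks there are 32 possible schedules per player and 32^4 (about one million) joint assignments, which plain enumeration handles in seconds. The sketch below assumes each scheduled player in a block receives C divided by the number of players sharing that block, which is one reading of the description above; the enumeration skeleton stays exact for whatever rate function you substitute in `total_data`. For larger instances, the usual exact route would be an integer (mixed-integer) programming formulation instead.

```python
from itertools import product

C = 1.0
N_PLAYERS, N_BLOCKS = 4, 5


def total_data(masks):
    """masks[p] is a bitmask of the blocks player p is scheduled in.
    Assumed rate model: each scheduled player gets an equal share C / (players in block)."""
    total = 0.0
    for b in range(N_BLOCKS):
        sharing = sum(1 for m in masks if (m >> b) & 1)
        if sharing:
            total += sharing * (C / sharing)
    return total


best_value, best_masks = float("-inf"), None
# Enumerate every joint assignment of players to block subsets (32**4 cases).
for masks in product(range(1 << N_BLOCKS), repeat=N_PLAYERS):
    value = total_data(masks)
    if value > best_value:
        best_value, best_masks = value, masks

schedule = [[b for b in range(N_BLOCKS) if (m >> b) & 1] for m in best_masks]
print(best_value, schedule)
```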