How to speed up the addition of a new column in pandas, based on comparisons on an existing one - performance

I am working on a large-ish dataframe collection with some machine data in several tables. The goal is to add a column to every table which expresses the row's "class", considering its vicinity to a certain time stamp.
seconds = 1800

for i in range(len(tables)):  # looping over 20 equally structured tables containing machine data
    table = tables[i]
    table['Class'] = 'no event'
    for event in events[i].values:  # looping over 20 equally structured tables containing events
        event_time = event[1]  # get integer time stamp
        start_time = event_time - seconds
        table.loc[(table.Time <= event_time) & (table.Time >= start_time), 'Class'] = 'event soon'
The event_times and the entries in table.Time are integers. The point is to assign the class "event soon" to all rows in a specific time frame before an event (the number of seconds).
The code takes quite long to run, and I am not sure what is to blame and what can be fixed. The number of seconds does not have much impact on the runtime, so the part where the table is actually changed is probably working fine and it may have to do with the nested loops instead. However, I don't see how to get rid of them. Hopefully, there is a faster, more pandas-like way to go about adding this class column.
I am working with Python 3.6 and Pandas 0.19.2

You can use numpy broadcasting to do this vectorised instead of looping.
Dummy data generation
num_tables = 5
seconds = 1800

def gen_table(count):
    for i in range(count):
        times = [(100 + j)**2 for j in range(i, 50 + i)]
        df = pd.DataFrame(data={'Time': times})
        yield df

def gen_events(count, num_tables):
    for i in range(num_tables):
        times = [1E4 + 100 * (i + j)**2 for j in range(count)]
        yield pd.DataFrame(data={'events': times})

tables = list(gen_table(num_tables))  # a list of 5 DataFrames of length 50
events = list(gen_events(5, num_tables))  # a list of 5 DataFrames of length 5
Comparison
For debugging, I added a dict of verification DataFrames. They are not needed for the solution; I only used them to check the results.
verification = {}
for i, (table, event_df) in enumerate(zip(tables, events)):
    event_list = event_df['events']
    time_diff = event_list.values - table['Time'].values[:, np.newaxis]  # This is where the magic happens
    events_close = np.any((0 < time_diff) & (time_diff < seconds), axis=1)
    table['Class'] = np.where(events_close, 'event soon', 'no event')

    # The stuff after this line can be deleted since it's only used for the verification
    df = pd.DataFrame(data=time_diff, index=table['Time'], columns=event_list)
    df['event'] = np.any((0 < time_diff) & (time_diff < seconds), axis=1)
    verification[i] = df
newaxis
A good explanation of broadcasting can be found in jakevdp's book, the Python Data Science Handbook.
table['Time'].values[:,np.newaxis]
gives a (50, 1) 2-d array:

array([[10000],
       [10201],
       [10404],
       ...
       [21609],
       [21904],
       [22201]], dtype=int64)
Verification
For the first table, the verification df looks like this:
events 10000.0 10100.0 10400.0 10900.0 11600.0 event
Time
10000 0.0 100.0 400.0 900.0 1600.0 True
10201 -201.0 -101.0 199.0 699.0 1399.0 True
10404 -404.0 -304.0 -4.0 496.0 1196.0 True
10609 -609.0 -509.0 -209.0 291.0 991.0 True
10816 -816.0 -716.0 -416.0 84.0 784.0 True
11025 -1025.0 -925.0 -625.0 -125.0 575.0 True
11236 -1236.0 -1136.0 -836.0 -336.0 364.0 True
11449 -1449.0 -1349.0 -1049.0 -549.0 151.0 True
11664 -1664.0 -1564.0 -1264.0 -764.0 -64.0 False
11881 -1881.0 -1781.0 -1481.0 -981.0 -281.0 False
12100 -2100.0 -2000.0 -1700.0 -1200.0 -500.0 False
12321 -2321.0 -2221.0 -1921.0 -1421.0 -721.0 False
12544 -2544.0 -2444.0 -2144.0 -1644.0 -944.0 False
....
20449 -10449.0 -10349.0 -10049.0 -9549.0 -8849.0 False
20736 -10736.0 -10636.0 -10336.0 -9836.0 -9136.0 False
21025 -11025.0 -10925.0 -10625.0 -10125.0 -9425.0 False
21316 -11316.0 -11216.0 -10916.0 -10416.0 -9716.0 False
21609 -11609.0 -11509.0 -11209.0 -10709.0 -10009.0 False
21904 -11904.0 -11804.0 -11504.0 -11004.0 -10304.0 False
22201 -12201.0 -12101.0 -11801.0 -11301.0 -10601.0 False

Small optimizations of the original approach.
You can shave a few lines and some assignments off the original algorithm:
for table, event_df in zip(tables, events):
    table['Class'] = 'no event'
    for event_time in event_df['events']:  # looping over the events for this table
        start_time = event_time - seconds
        table.loc[table['Time'].between(start_time, event_time), 'Class'] = 'event soon'
You might shave off some more time if, instead of the strings 'no event' and 'event soon', you just used booleans.
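As a rough sketch of that idea (the column name EventSoon is my own choice, not from the original code):

# Sketch only: a boolean column instead of the 'no event' / 'event soon' strings.
for table, event_df in zip(tables, events):
    table['EventSoon'] = False
    for event_time in event_df['events']:
        start_time = event_time - seconds
        table.loc[table['Time'].between(start_time, event_time), 'EventSoon'] = True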

Related

How to add a maximum travel time duration for the sum of all routes in VRP Google OR-TOOLS

I am new to programming and used Google OR-Tools to create my VRP model. In my current model, I have included a general time window and a capacity constraint per vehicle, creating a capacitated vehicle routing problem with time windows. I followed the OR-Tools guides, which contain a maximum travel duration for each vehicle.
However, I want to include a maximum travel duration for the sum of all routes, whereas the maximum travel duration for each vehicle does not matter (so I set it to 100,000). Accordingly, I want to create something in the model/solution printer that tells me how many addresses could not be visited due to the constraint on the maximum travel duration for the sum of all routes. From the examples I have seen I think it would be kind of easy, but my knowledge of programming is fairly limited, so my attempts had no success. Can anyone help me?
import pandas as pd
import openpyxl
import numpy as np
import math
from random import sample
from ortools.constraint_solver import routing_enums_pb2
from ortools.constraint_solver import pywrapcp
from scipy.spatial.distance import squareform, pdist
from haversine import haversine

#STEP - create data
# import/read excel file
data = pd.read_excel(r'C:\Users\Jean-Paul\Documents\Thesis\OR TOOLS\Data.xlsx', engine='openpyxl')
df = pd.DataFrame(data, columns=['number', 'lat', 'lng'])  # create dataframe with 10805 addresses + address of the depot
#print (df)

# randomly sample X addresses from the dataframe and their corresponding number/latitude/longitude
df_sample = df.sample(n=100)
#print (df_sample)

# read first row of the excel file (= coordinates of the depot)
df_depot = pd.DataFrame(data, columns=['number', 'lat', 'lng']).iloc[0:1]
#print (df_depot)

# combine dataframe of depot and sample into one dataframe
df_data = pd.concat([df_depot, df_sample], ignore_index=True, sort=False)
#print (df_data)

#STEP - create distance matrix data
# determine the distance between each pair of latitude/longitude coordinates
df_data.set_index('number', inplace=True)
matrix_distance = pd.DataFrame(squareform(pdist(df_data, metric=haversine)), index=df_data.index, columns=df_data.index)
matrix_list = np.array(matrix_distance)
#print (matrix_distance)  # table of distances between addresses including headers
#print (matrix_list)      # the same table as a list of lists, excluding headers

#STEP - create time matrix data
travel_time = matrix_list / 15 * 60  # divide distance by the travel speed and multiply by 60 minutes
#print (travel_time)  # travel time matrix

#STEP - create time window data
# couriers have to visit each address within 0-X minutes; store one time window per address as a list of lists
window_range = []
for i in range(len(df_data)):
    window_range.append([0, 240])  # time window range for each address
#print (window_range)

#STEP - create demand data
# all addresses demand 1 parcel except the depot
demand_range = []
for i in range(len(df_data.iloc[0:1])):
    demand_range.append(0)
for j in range(len(df_data.iloc[1:])):
    demand_range.append(1)
#print (demand_range)

#STEP - create fleet size data (amount of vehicles in the fleet)
fleet_size = 6
#print (fleet_size)

#STEP - create capacity data for each vehicle
fleet_capacity = []
for i in range(fleet_size):
    fleet_capacity.append(20)  # capacity per vehicle
#print (fleet_capacity)
#STEP - create data model that stores all data for the problem
def create_data_model():
    data = {}
    data['time_matrix'] = travel_time
    data['time_windows'] = window_range
    data['num_vehicles'] = fleet_size
    data['depot'] = 0  # index of the depot
    data['demands'] = demand_range
    data['vehicle_capacities'] = fleet_capacity
    return data
#STEP - creating the solution printer
def print_solution(data, manager, routing, solution):
    """Prints solution on console."""
    print(f'Objective: {solution.ObjectiveValue()}')
    time_dimension = routing.GetDimensionOrDie('Time')
    total_time = 0
    for vehicle_id in range(data['num_vehicles']):
        index = routing.Start(vehicle_id)
        plan_output = 'Route for vehicle {}:\n'.format(vehicle_id)
        while not routing.IsEnd(index):
            time_var = time_dimension.CumulVar(index)
            plan_output += '{0} Time({1},{2}) -> '.format(
                manager.IndexToNode(index), solution.Min(time_var),
                solution.Max(time_var))
            index = solution.Value(routing.NextVar(index))
        time_var = time_dimension.CumulVar(index)
        plan_output += '{0} Time({1},{2})\n'.format(manager.IndexToNode(index),
                                                    solution.Min(time_var),
                                                    solution.Max(time_var))
        plan_output += 'Time of the route: {}min\n'.format(
            solution.Min(time_var))
        print(plan_output)
        total_time += solution.Min(time_var)
    print('Total time of all routes: {}min'.format(total_time))
#STEP - create the VRP solver
def main():
    # instantiate the data problem
    data = create_data_model()

    # create the routing index manager
    manager = pywrapcp.RoutingIndexManager(len(data['time_matrix']),
                                           data['num_vehicles'], data['depot'])

    # create routing model
    routing = pywrapcp.RoutingModel(manager)

    #STEP - create demand callback and dimension for capacity
    # create and register a transit callback
    def demand_callback(from_index):
        """Returns the demand of the node."""
        # convert from routing variable Index to demands NodeIndex
        from_node = manager.IndexToNode(from_index)
        return data['demands'][from_node]

    demand_callback_index = routing.RegisterUnaryTransitCallback(
        demand_callback)
    routing.AddDimensionWithVehicleCapacity(
        demand_callback_index,
        0,  # null capacity slack
        data['vehicle_capacities'],  # vehicle maximum capacities
        True,  # start cumul to zero
        'Capacity')

    #STEP - create time callback
    # create and register a transit callback
    def time_callback(from_index, to_index):
        """Returns the travel time between the two nodes."""
        # convert from routing variable Index to time matrix NodeIndex
        from_node = manager.IndexToNode(from_index)
        to_node = manager.IndexToNode(to_index)
        return data['time_matrix'][from_node][to_node]

    transit_callback_index = routing.RegisterTransitCallback(time_callback)

    # define cost of each arc (costs in terms of travel time)
    routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)

    #STEP - create a dimension for the travel time (TIME WINDOW)
    # a dimension keeps track of quantities that accumulate over a vehicle's route
    # add time windows constraint
    time = 'Time'
    routing.AddDimension(
        transit_callback_index,
        2,       # allow waiting time (does not have an influence in this model)
        100000,  # maximum total route length in minutes per vehicle (does not have an influence because of the capacity constraint)
        False,   # do not force start cumul to zero
        time)
    time_dimension = routing.GetDimensionOrDie(time)

    # add time window constraints for each location except the depot
    for location_idx, time_window in enumerate(data['time_windows']):
        if location_idx == data['depot']:
            continue
        index = manager.NodeToIndex(location_idx)
        time_dimension.CumulVar(index).SetRange(time_window[0], time_window[1])

    # add time window constraint for each vehicle start node
    depot_idx = data['depot']
    for vehicle_id in range(data['num_vehicles']):
        index = routing.Start(vehicle_id)
        time_dimension.CumulVar(index).SetRange(
            data['time_windows'][depot_idx][0],
            data['time_windows'][depot_idx][1])

    #STEP - instantiate route start and end times to produce feasible times
    for i in range(data['num_vehicles']):
        routing.AddVariableMinimizedByFinalizer(
            time_dimension.CumulVar(routing.Start(i)))
        routing.AddVariableMinimizedByFinalizer(
            time_dimension.CumulVar(routing.End(i)))

    #STEP - set default search parameters and a heuristic method for finding the first solution
    search_parameters = pywrapcp.DefaultRoutingSearchParameters()
    search_parameters.first_solution_strategy = (
        routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)

    #STEP - solve the problem with the search parameters and print the solution
    solution = routing.SolveWithParameters(search_parameters)
    if solution:
        print_solution(data, manager, routing, solution)

if __name__ == '__main__':
    main()
See @Mizux's answer, which goes under the hood in the solver to build a summation cost over all vehicle route lengths:
https://stackoverflow.com/a/68756570/13773745
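As a rough sketch of the idea (this is not Mizux's exact code; max_total_time is a hypothetical limit in minutes), you can post a constraint on the sum of all route durations through the existing 'Time' dimension, and add disjunctions so the solver is allowed to drop addresses when that limit binds:

# Sketch only - add inside main() after the 'Time' dimension is created.
# max_total_time is a hypothetical limit (minutes) on the sum of all routes.
solver = routing.solver()
route_durations = [
    time_dimension.CumulVar(routing.End(v)) - time_dimension.CumulVar(routing.Start(v))
    for v in range(data['num_vehicles'])
]
solver.Add(solver.Sum(route_durations) <= max_total_time)

# Allow addresses to be dropped (at a penalty) so the model stays feasible;
# dropped nodes can then be counted in the solution printer.
penalty = 100000  # hypothetical penalty per dropped address
for node in range(1, len(data['time_matrix'])):
    routing.AddDisjunction([manager.NodeToIndex(node)], penalty)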

Confused about the use of validation set here

In the main.py of the px2graph project, the training and validation part looks like this:
splits = [s for s in ['train', 'valid'] if opt.iters[s] > 0]
start_round = opt.last_round - opt.num_rounds

# Main training loop
for round_idx in range(start_round, opt.last_round):
    for split in splits:
        print("Round %d: %s" % (round_idx, split))
        loader.start_epoch(sess, split, train_flag, opt.iters[split] * opt.batchsize)

        flag_val = split == 'train'

        for step in tqdm(range(opt.iters[split]), ascii=True):
            global_step = step + round_idx * opt.iters[split]
            to_run = [sample_idx, summaries[split], loss, accuracy]
            if split == 'train': to_run += [optim]

            # Do image summaries at the end of each round
            do_image_summary = step == opt.iters[split] - 1
            if do_image_summary: to_run[1] = image_summaries[split]

            # Start with lower learning rate to prevent early divergence
            t = 1/(1+np.exp(-(global_step-5000)/1000))
            lr_start = opt.learning_rate / 15
            lr_end = opt.learning_rate
            tmp_lr = (1-t) * lr_start + t * lr_end

            # Run computation graph
            result = sess.run(to_run, feed_dict={train_flag:flag_val, lr:tmp_lr})

            out_loss = result[2]
            out_accuracy = result[3]

            if sum(out_loss) > 1e5:
                print("Loss diverging...exiting before code freezes due to NaN values.")
                print("If this continues you may need to try a lower learning rate, a")
                print("different optimizer, or a larger batch size.")
                return

            time_str = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
            print("{}: step {}, loss {:g}, acc {:g}".format(time_str, global_step, out_loss, out_accuracy))

            # Log data
            if split == 'valid' or (split == 'train' and step % 20 == 0) or do_image_summary:
                writer.add_summary(result[1], global_step)
                writer.flush()

        # Save training snapshot
        saver.save(sess, 'exp/' + opt.exp_id + '/snapshot')
        with open('exp/' + opt.exp_id + '/last_round', 'w') as f:
            f.write('%d\n' % round_idx)
It seems that the author only gets the result for each batch of the validation set. I am wondering, if I want to observe whether the model is improving or reaching its best performance, should I use the result on the whole validation set?
If the validation set is small enough, you can compute the loss and accuracy on the whole validation set during training to monitor performance. If the validation set is too large, it is better to compute batch-wise validation results over multiple steps and aggregate them.
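As a rough sketch of that aggregation (assuming the same sess, loss, accuracy, train_flag tensors and data loader as in the snippet above; val_steps is a hypothetical number of validation batches):

# Hypothetical sketch: accumulate metrics over the whole validation set in
# batches, then report the averages once per round.
val_losses, val_accs = [], []
for _ in range(val_steps):
    batch_loss, batch_acc = sess.run([loss, accuracy], feed_dict={train_flag: False})
    val_losses.append(np.mean(batch_loss))
    val_accs.append(batch_acc)

print("validation: loss {:g}, acc {:g}".format(np.mean(val_losses), np.mean(val_accs)))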

Can I use Tarantool instead of Redis?

I'd like to use Tarantool to store data. How can I store data with TTL and simple logic (without the spaces)?
Like this:
box:setx(key, value, ttl);
box:get(key)
Yes, you can expire data in Tarantool, and in a much more flexible way than in Redis. You can't do it without spaces, though, because a space is the container for data in Tarantool (like a database or table in other database systems).
In order to expire data, you have to install the expirationd Tarantool rock using the command tarantoolctl rocks install expirationd. Full documentation of the expirationd daemon can be found here.
Feel free to use the sample code below:
#!/usr/bin/env tarantool

package.path = './.rocks/share/tarantool/?.lua;' .. package.path

local fiber = require('fiber')
local expirationd = require('expirationd')

-- setup the database
box.cfg{}
box.once('init', function()
    box.schema.create_space('test')
    box.space.test:create_index('primary', {parts = {1, 'unsigned'}})
end)

-- print all fields of all tuples in a given space
local print_all = function(space_id)
    for _, v in ipairs(box.space[space_id]:select()) do
        local s = ''
        for i = 1, #v do s = s .. tostring(v[i]) .. '\t' end
        print(s)
    end
end

-- return true if a tuple is more than 10 seconds old
local is_expired = function(args, tuple)
    return (fiber.time() - tuple[3]) > 10
end

-- simply delete a tuple from a space
local delete_tuple = function(space_id, args, tuple)
    box.space[space_id]:delete{tuple[1]}
end

local space = box.space.test

print('Inserting tuples...')
space:upsert({1, '0 seconds', fiber.time()}, {})
fiber.sleep(5)
space:upsert({2, '5 seconds', fiber.time()}, {})
fiber.sleep(5)
space:upsert({3, '10 seconds', fiber.time()}, {})

print('Tuples are ready:\n')
print_all('test')

print('\nStarting expiration daemon...\n')
-- start the expiration daemon
-- in production, full_scan_time should be bigger than 1 sec
expirationd.start('expire_old_tuples', space.id, is_expired, {
    process_expired_tuple = delete_tuple, args = nil,
    tuples_per_iteration = 50, full_scan_time = 1
})

fiber.sleep(5)
print('\n\n5 seconds passed...')
print_all('test')

fiber.sleep(5)
print('\n\n10 seconds passed...')
print_all('test')

fiber.sleep(5)
print('\n\n15 seconds passed...')
print_all('test')

os.exit()

Calculate running/cumulative cost of EC2 spot instance

I often run spot instances on EC2 (for Hadoop task jobs, temporary nodes, etc.) Some of these are long-running spot instances.
It's fairly easy to calculate the cost for on-demand or reserved EC2 instances - but how do I calculate the cost incurred for a specific node (or nodes) that are running as spot instances?
I am aware that the cost for a spot instance changes every hour depending on market rate - so is there any way to calculate the cumulative total cost for a running spot instance? Through an API or otherwise?
OK, I found a way to do this with the boto library. The code is not perfect - boto doesn't seem to return the exact time range requested, but it does get the historic spot prices more or less within that range. The following code seems to work quite well. If anyone can improve on it, that would be great.
import boto, datetime, time

# Enter your AWS credentials
aws_key = "YOUR_AWS_KEY"
aws_secret = "YOUR_AWS_SECRET"

# Details of instance & time range you want to find spot prices for
instanceType = 'm1.xlarge'
startTime = '2012-07-01T21:14:45.000Z'
endTime = '2012-07-30T23:14:45.000Z'
aZ = 'us-east-1c'

# Some other variables
maxCost = 0.0
minTime = float("inf")
maxTime = 0.0
totalPrice = 0.0
oldTimee = 0.0

# Connect to EC2
conn = boto.connect_ec2(aws_key, aws_secret)

# Get prices for instance, AZ and time range
prices = conn.get_spot_price_history(instance_type=instanceType,
    start_time=startTime, end_time=endTime, availability_zone=aZ)

# Output the prices
print "Historic prices"
for price in prices:
    timee = time.mktime(datetime.datetime.strptime(price.timestamp,
        "%Y-%m-%dT%H:%M:%S.000Z").timetuple())
    print "\t" + price.timestamp + " => " + str(price.price)
    # Get max and min time from results
    if timee < minTime:
        minTime = timee
    if timee > maxTime:
        maxTime = timee
    # Get the max cost
    if price.price > maxCost:
        maxCost = price.price
    # Calculate total price
    if not (oldTimee == 0):
        totalPrice += (price.price * abs(timee - oldTimee)) / 3600
    oldTimee = timee

# Difference b/w first and last returned times
timeDiff = maxTime - minTime

# Output aggregate, average and max results
print "For: one %s in %s" % (instanceType, aZ)
print "From: %s to %s" % (startTime, endTime)
print "\tTotal cost = $" + str(totalPrice)
print "\tMax hourly cost = $" + str(maxCost)
print "\tAvg hourly cost = $" + str(totalPrice * 3600 / timeDiff)
I've rewritten Suman's solution to work with boto3. Make sure to use UTC datetimes with the timezone set:
def get_spot_instance_pricing(ec2, instance_type, start_time, end_time, zone):
    result = ec2.describe_spot_price_history(InstanceTypes=[instance_type], StartTime=start_time, EndTime=end_time, AvailabilityZone=zone)
    assert 'NextToken' not in result or result['NextToken'] == ''

    total_cost = 0.0
    total_seconds = (end_time - start_time).total_seconds()
    total_hours = total_seconds / (60*60)
    computed_seconds = 0

    last_time = end_time
    for price in result["SpotPriceHistory"]:
        price["SpotPrice"] = float(price["SpotPrice"])

        available_seconds = (last_time - price["Timestamp"]).total_seconds()
        remaining_seconds = total_seconds - computed_seconds
        used_seconds = min(available_seconds, remaining_seconds)

        total_cost += (price["SpotPrice"] / (60 * 60)) * used_seconds
        computed_seconds += used_seconds

        last_time = price["Timestamp"]

    # Average hourly cost over the requested window
    avg_hourly_cost = total_cost / total_hours
    return avg_hourly_cost, total_cost, total_hours
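For example, a usage sketch (the region, availability zone, and instance type below are placeholders) with timezone-aware datetimes:

import boto3
from datetime import datetime, timedelta, timezone

# Hypothetical usage - note the timezone-aware UTC datetimes.
ec2 = boto3.client('ec2', region_name='us-east-1')
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(days=7)

avg_hourly, total, hours = get_spot_instance_pricing(
    ec2, 'm5.xlarge', start_time, end_time, 'us-east-1a')
print("avg ${:.4f}/h, total ${:.2f} over {:.1f} h".format(avg_hourly, total, hours))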
You can subscribe to the spot instance data feed to get charges for your running instances dumped to an S3 bucket. Install the ec2 toolset and then run:
ec2-create-spot-datafeed-subscription -b bucket-to-dump-in
Note: you can have only one data feed subscription for your entire account.
In about an hour you should start seeing gzipped, tab-delimited files show up in the bucket that look something like this:
#Version: 1.0
#Fields: Timestamp UsageType Operation InstanceID MyBidID MyMaxPrice MarketPrice Charge Version
2013-05-20 14:21:07 UTC SpotUsage:m1.xlarge RunInstances:S0012 i-1870f27d sir-b398b235 0.219 USD 0.052 USD 0.052 USD 1
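As a rough sketch (not an official tool; the local ./feed/ directory is a placeholder for wherever you download the bucket contents, and the columns are assumed to be tab-separated as in the #Fields header), you could total the Charge column per instance like this:

import glob
import gzip

# Hypothetical sketch: sum the Charge column of the data-feed files,
# e.g. after running: aws s3 sync s3://bucket-to-dump-in ./feed/
totals = {}
for path in glob.glob('./feed/**/*.gz', recursive=True):
    with gzip.open(path, 'rt') as f:
        for line in f:
            if line.startswith('#'):                  # skip the #Version / #Fields header lines
                continue
            fields = line.rstrip('\n').split('\t')
            instance_id = fields[3]                   # InstanceID column
            charge = float(fields[7].split()[0])      # Charge column, e.g. "0.052 USD"
            totals[instance_id] = totals.get(instance_id, 0.0) + charge

for instance_id, charge in sorted(totals.items()):
    print(instance_id, round(charge, 4), "USD")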
I have recently developed a small python library that calculates the cost of a single EMR cluster, or for a list of clusters (given a period of days).
It takes into account Spot instances and Task nodes as well (that may go up and down while the cluster is still running).
In order to calculate the cost I use the bid price, which (in many cases) might not be the exact price that you end up paying for the instance.
Depending on your bidding policy however, this price can be accurate enough.
You can find the code here: https://github.com/memosstilvi/emr-cost-calculator

Sorting and Balancing Across Multiple Columns

Problem
I have a Hash of data that looks something like this.
{ "GROUP_A" => [22, 440],
"GROUP_B" => [14, 70],
"GROUP_C" => [60, 620],
"GROUP_D" => [174, 40],
"GROUP_E" => [4, 12]
# ...few hundred more
}
GROUP_A has 22 accounts and they are using 440GB of data...and so on. There are a couple hundred of these groups. Some have a lot of accounts but use very little storage and some have only a few users and use A LOT of storage, some are just average.
I have X number of buckets (servers) that I want to put these groups of accounts into, and I want there to be approximately the same number of accounts per bucket and have each bucket also contain approximately the same amount of data. Number of groups is not important, so if a bucket had 1 group of 1000 accounts using 500GB of data and the next bucket had 10 groups of 97 accounts (970 total) using 450GB of data...I'd call it good.
So far I've not come up with an algorithm that will do this. In my mind I'm thinking of something like this perhaps?
PASS 1
Bucket 1: Group with largest data, 60 users.
Bucket 2: Next largest data group, 37 users.
Bucket 3: Next largest data group, 72 users.
Bucket 4: etc....
PASS 2
Bucket 1: Add a group with small amount of data, but more users than average.
# There's probably a ratio I can calculate to figure this out...divide users/datavmaybe?
Bucket 2: Find a "small data" group where sum of users in Bucket 1 ~= sum of users in Bucket 2
# But then there's no guarantee that the data usages will be close enough
Bucket 3: etc...
PASS 3
Bucket 1: Now what? Back to next largest data group?
I still think there's a better way to figure this out but it's not coming to me. If anyone has any thoughts I'm open to suggestions.
Matt
Solution 1.1 - Brute Force Update
Well....here's an update to the first attempt. This is still not a "knapsack-problem" solution, just brute-forcing the data so the accounts balance across buckets. This time I added some logic so that if a bucket has a higher fill percentage of accounts vs. data, it will find the largest group (by data) that fits best based on the number of accounts. I get a much better distribution of data now vs. my first attempt (see the edit history if you want to look at the first attempt).
Right now I load each bucket in sequence, filling bucket one, then bucket two, etc... I think if I was to modify the code so that I filled them simultaneously (or nearly so) I'd get a better data balance.
e.g. 1st department into bucket 1, 2nd department into bucket 2, etc...until all buckets have one department... Then start back with bucket 1 again.
dept_arr_sorted_by_acct = dept_hsh.sort_by {|key, value| value[0]}

ap "MAX ACCTS: #{max_accts} AVG ACCTS: #{avg_accts}"
ap "MAX SIZE: #{max_size} AVG SIZE: #{avg_data}"
# puts dept_arr_sorted_by_acct
# exit

bucket_arr = Array.new
used_hsh = Hash.new

server_names.each do |s|
  bucket_hsh = Hash.new
  this_accts = 0
  this_data = 0
  my_key = ""
  my_val = []
  accts = 0
  data = 0
  accts_space_pct_used = 0
  data_space_pct_used = 0
  while this_accts < avg_accts
    if accts_space_pct_used <= data_space_pct_used
      # This loop runs if the % used of accts is less than % used of data
      dept_arr_sorted_by_acct.each do |val|
        # Sorted by num accts - ascending. Loop until we find the last entry in the array that has <= accts than what we need
        next if used_hsh.has_key?(val[0])
        if val[1][0] <= avg_accts - this_accts
          my_key = val[0]
          my_val = val[1]
          accts = val[1][0]
          data = val[1][1]
        end
      end
    else
      # This loop runs if the % used of data is less than % used of accts
      dept_arr_sorted_by_data = dept_arr_sorted_by_acct.sort { |a,b| b[1][1] <=> a[1][1] }
      dept_arr_sorted_by_data.each do |val|
        # Sorted by size - descending. Find the first (largest data) entry where accts <= what we need
        next if used_hsh.has_key?(val[0])
        if val[1][0] <= avg_accts - this_accts
          my_key = val[0]
          my_val = val[1]
          accts = val[1][0]
          data = val[1][1]
          break
        end
      end
    end
    used_hsh[my_key] = my_val
    bucket_hsh[my_key] = my_val
    this_accts = this_accts + accts
    this_data = this_data + data
    accts_space_pct_used = this_accts.to_f / avg_accts * 100
    data_space_pct_used = this_data.to_f / avg_data * 100
  end
  bucket_arr << [this_accts, this_data, bucket_hsh]
end

x = 0
while x < bucket_arr.size do
  th = bucket_arr[x][2]
  list_of_depts = []
  th.each_key do |key|
    list_of_depts << key
  end
  ap "Bucket #{x}: #{bucket_arr[x][0]} accounts :: #{bucket_arr[x][1]} data :: #{list_of_depts.size} departments"
  # ap list_of_depts
  x = x + 1
end
...and the results...
"MAX ACCTS: 2279 AVG ACCTS: 379"
"MAX SIZE: 1693315 AVG SIZE: 282219"
"Bucket 0: 379 accounts :: 251670 data :: 7 departments"
"Bucket 1: 379 accounts :: 286747 data :: 10 departments"
"Bucket 2: 379 accounts :: 278226 data :: 14 departments"
"Bucket 3: 379 accounts :: 281292 data :: 19 departments"
"Bucket 4: 379 accounts :: 293777 data :: 28 departments"
"Bucket 5: 379 accounts :: 298675 data :: 78 departments"
(379 * 6 <> 2279) I still need to figure out how to account for when the MAX_ACCTS are not evenly divisible by the number of buckets. I tried adding a 1% pad to the AVG_ACCTS value, which in this case means the average would be 383 I think, but then all the buckets say they have 383 accounts in them...which can't be true because then there are more accounts in the buckets than MAX_ACCTS. I've got a mistake in the code somewhere that I haven't found yet.
This is an example of the knapsack problem. There are a few solutions, but it's a really tricky problem and it's better to research a good solution than to try and make your own.
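For illustration only, here is a hedged sketch of one simple greedy heuristic (in Python rather than Ruby, and not an exact partitioning algorithm): process groups from largest data usage to smallest and always place the next group into the bucket with the lowest combined normalized load. The sample groups dict and num_buckets value below are placeholders mirroring the hash at the top of the question.

# Illustrative greedy heuristic only (not an exact solution).
groups = {"GROUP_A": (22, 440), "GROUP_B": (14, 70), "GROUP_C": (60, 620),
          "GROUP_D": (174, 40), "GROUP_E": (4, 12)}   # accounts, GB - sample data
num_buckets = 2  # hypothetical number of servers

total_accounts = sum(a for a, _ in groups.values())
total_data = sum(d for _, d in groups.values())

buckets = [{"accounts": 0, "data": 0, "groups": []} for _ in range(num_buckets)]

def load(bucket):
    # normalized load so accounts and data are weighted equally
    return bucket["accounts"] / total_accounts + bucket["data"] / total_data

# Largest data usage first, each group into the currently least-loaded bucket.
for name, (accounts, data) in sorted(groups.items(), key=lambda kv: -kv[1][1]):
    target = min(buckets, key=load)
    target["accounts"] += accounts
    target["data"] += data
    target["groups"].append(name)

for i, b in enumerate(buckets):
    print("Bucket %d: %d accounts :: %d data :: %d groups"
          % (i, b["accounts"], b["data"], len(b["groups"])))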
