Ruby / Redmine: compile a hash from issue subjects and appropriate custom fields

I'm using Redmine with the Computed Custom Field plugin.
The plugin makes custom fields computable and accepts Ruby code for the calculations.
In Redmine I have a project (Project_id = 11) in which I calculate the cost of products in a separate custom field for each issue. It looks like this:
Each Issue has a custom field (cf_id = 31) for selecting the product: Pears, Pineapples, Tomatoes, Coconuts.
Each Issue has a custom field (cf_id = 32) for entering the quantity (pieces) of goods.
Each Issue has a custom field (cf_id = 33) for entering the weight (pounds) of goods.
Each Issue has a computed custom field (cf_id = 34) in which the formula calculates the cost of the product.
The formula in the computed custom field (cf_id = 34) includes two hashes with prices of products (depending on the product type):
products_by_weight = {
  "Pears" => [110],
  "Tomatoes" => [120]
}
products_by_pieces = {
  "Pineapples" => [130, 300],
  "Coconuts" => [140, 200]
}
Then my formula checks whether the product selected in cf_id = 31 belongs to the first or the second hash and performs the corresponding calculation:
It multiplies the price by the weight (cf_id = 33) for goods from the first list,
or it multiplies the price by the quantity (cf_id = 32) for goods from the second list. The second value in the value array of the "products_by_pieces" hash is the weight limit per piece. If the weight (cf_id = 33) divided by that limit is larger than the quantity entered in cf_id = 32, then the formula in scenario 2 uses this computed quantity instead of the entered one.
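Put together, the computed field's formula behaves roughly like the sketch below (my reconstruction rather than the exact code; I assume the plugin exposes the current issue as self, and the .ceil rounding of weight divided by limit is my guess):

products_by_weight = { "Pears" => [110], "Tomatoes" => [120] }
products_by_pieces = { "Pineapples" => [130, 300], "Coconuts" => [140, 200] }

product  = self.custom_field_value(31)       # selected product name
quantity = self.custom_field_value(32).to_f  # quantity in pieces
weight   = self.custom_field_value(33).to_f  # weight in pounds

if products_by_weight.key?(product)
  price = products_by_weight[product].first
  price * weight                              # scenario 1: price times weight
elsif products_by_pieces.key?(product)
  price, limit = products_by_pieces[product]
  pieces = [quantity, (weight / limit).ceil].max  # scenario 2: the larger quantity wins
  price * pieces
end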
Now I'm trying to move these variables outside the formula. I made a project (Project_id = 22) in which I want to store these variables as issues.
I imagine it like this:
The name of the issue is the name of the product
Each issue has two custom fields:
cf_id = 41 is price of product
cf_id = 42 is weight limit per piece
For each issue a category is assigned: "products_by_weight" or "products_by_pieces".
I want to compile the same hashes that are currently written directly into the formula of cf_id = 34 in project 11, but automatically, from the issues of project 22 and taking the category into account.
So far, all I have achieved is finding the price of a known product among the issues of project 22:
price = Project.find(22).issues.where(subject: "Pineapples").first.try(:custom_field_value,41)
But this does not really help, and it requires changing the code every time a new product is added.
I'm new to programming and Ruby, so I'm experimenting with Redmine classes, and tried to compile a hash with code like this:
Issue.by_category(Project.find(22))
But as a result, so far I have received only this:
[{"status_id"=>"27", "closed"=>true, "category_id"=>"1", "total"=>"10"}]
Which is completely different from the result I expect.
Any help would be appreciated!
UPD.
Right now my variables (product prices and weight limits) live in a hash that is written directly into the code of computed field 34. I do not want these variables (prices and weight limits) to be part of the code: I want to manage them as issues with the corresponding custom fields (41 and 42) in a separate project (22), so that a regular user can change or add these values through issues without having to touch the code of the computed custom field (34). So I want to compile that hash from the issues of project 22 instead of writing it directly. As I imagine it, the subjects of the issues of project 22 should become the keys, and arrays of the custom field values [41, 42] the values. I also need two separate hashes, determined by the assigned category ("products_by_weight" and "products_by_pieces"), because they are calculated differently, and project 22 also holds other variables stored as custom field values in issues with a different category.

I solved this problem in the following way.
As planned, I now store the price list for my products as issues in a separate project (ID 22). To get the price hash of all products of the selected category (A) in the computed custom field's formula of an issue in project 11, I do the following:
PRICELIST_PROJECT_ID = 22
CATEGORY_A_ID = 1
PRICE_VALUE_CFID = 41
WLIMIT_VALUE_CFID = 42
delimiter = ','

# All price-list issues that belong to category A
pricelist_issues_cat_a = Project.find(PRICELIST_PROJECT_ID).issues.select { |issue| issue.category_id == CATEGORY_A_ID }

# Issue subjects become the keys; the parsed custom field values become the value arrays
cata_price_hash = pricelist_issues_cat_a.each_with_object({}) do |issue, hash|
  hash[issue.subject] = issue.try(:custom_field_value, PRICE_VALUE_CFID).to_s.split(delimiter).map(&:to_f)
end
And the same way for product category B.
Not sure if this is the most efficient way, but it works for me.
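If you need both categories at once, the same idea can be written in one pass with group_by; a small sketch under the same assumptions as above (CATEGORY_B_ID is a hypothetical ID for the second category):

CATEGORY_B_ID = 2 # hypothetical ID of the "products_by_pieces" category
issues_by_cat = Project.find(PRICELIST_PROJECT_ID).issues.group_by(&:category_id)
# Turn a list of issues into a "subject => [values]" hash
price_hash_for = lambda do |issues|
  (issues || []).each_with_object({}) do |issue, hash|
    hash[issue.subject] = issue.try(:custom_field_value, PRICE_VALUE_CFID).to_s.split(delimiter).map(&:to_f)
  end
end
products_by_weight = price_hash_for.call(issues_by_cat[CATEGORY_A_ID])
products_by_pieces = price_hash_for.call(issues_by_cat[CATEGORY_B_ID])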

Related

Dynamics CRM + plugin code to store a sum formula across an entity collection

I have the below requirement to implement in plugin code on an entity, say 'Entity A'.
Below is the data in 'Entity A'
Record 1 with field values
Price = 100
Quantity = 4
Record 2 with field values
Price = 200
Quantity = 2
I need to do 2 things
Add the values of the fields and update it in a new record
Store the Addition Formula in a different config entity
Example shown below -
Record 3
Price
Price Value = 300
Formula Value = 100 + 200
Quantity
Quantity Value = 6
Formula Value = 4 + 2
Entity A has a button named "Perform Addition" and once clicked this will trigger the plugin code.
Below is the code that I have tried.
AttributeList is the list of fields I need to perform the sum on. All fields are decimal.
Entity entityA = new Entity("entitya");
entityA.Id = new Guid("Guid String"); // placeholder for the actual GUID
var sourceEntityDataList = service.RetrieveMultiple(new FetchExpression(fetchXml)).Entities;
foreach (var value in AttributeList)
{
    entityA[value] = sourceEntityDataList.Sum(e => e.Contains(value) ? e.GetAttributeValue<Decimal>(value) : 0);
}
service.Update(entityA);
I would like to know whether there is a way, through LINQ, to store the formula without looping?
And if not, how can I achieve this?
Any help would be appreciated.
Here are some thoughts:
It's interesting that you're calculating values from multiple records and populating the result onto a sibling record rather than a parent record. This is different than a typical "rollup" calculation.
Dynamics uses the SQL sequential GUID generator to generate its ids. If you're generating GUIDs outside of Dynamics, you might want to look into leveraging the same logic.
Here's an example of how you might refactor your code with LINQ:
var target = new Entity("entitya", new Guid("guid"));
var entities = service.RetrieveMultiple(new FetchExpression(fetchXml)).Entities.ToList();
attributes.ForEach(a => target[a] = entities.Sum(e => e.GetAttributeValue<Decimal>(a)));
service.Update(target);
The GetAttributeValue<Decimal>() method defaults to 0, so we can skip the Contains call.
As far as storing the formula on a config entity goes: if you're looking for the capability to store and use any formula, you'll need a full expression parser, along the lines of this calculator example.
Whether you'll be able to do the Reflection required in a sandboxed plugin is another question.
If, however, you have a few set formulas, you can code them all into the plugin and determine which to use at runtime based on the entities' properties and/or config data.
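For the few-set-formulas route, here is a minimal sketch of runtime dispatch, reusing target, entities, and attributes from the snippet above (the formula names and the idea of reading one from config data are hypothetical):

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Xrm.Sdk;

// Set formulas coded into the plugin, keyed by a name that would come from config data
var formulas = new Dictionary<string, Func<List<Entity>, string, decimal>>
{
    { "sum",     (rows, attr) => rows.Sum(e => e.GetAttributeValue<decimal>(attr)) },
    { "average", (rows, attr) => rows.Average(e => e.GetAttributeValue<decimal>(attr)) }
};

var formulaName = "sum"; // read this from your config entity; "sum" reproduces the code above
foreach (var attr in attributes)
    target[attr] = formulas[formulaName](entities, attr);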

'LpVariable' object does not support indexing

I ran into the error 'LpVariable' object does not support indexing. Is this due to how my data are set up for my PuLP optimization?
Basically, I am trying to get the optimized sales by multiplying the replenishment quantity by the selling price for each SKU (UPC). The replenishment quantity will be based on the quantity sold (available in the dataset) and will be the constraint for the optimization problem: I should not replenish much more than what I sold.
Could someone with PuLP experience please help me with my error? Is the way I set up my for-loop logical?
Here is the key portion of my Python code:
After sorting my data based on LocationNumber, I inspected the first and last few rows (screenshot omitted).
df.drop(['LowlawWeekYear'], axis=1, inplace=True)  # Drop the WeekYear column
store_list = {108}  # LocationNumber; will only run one store for now
for store_number in sorted(store_list):
    specific_store = df[df['LocationNumber'] == store_number]
    # Mean of AvgSellingPricewoTax and Units: one row per UPC with the average values
    Qty_Price_df = specific_store.groupby('UPC', as_index=False)[['AvgSellingPricewoTax', 'Units']].mean()
    SKU_list = sorted(list(set(Qty_Price_df.UPC)))  # List of SKU numbers
    Variable_list = dict(zip(Qty_Price_df.UPC, Qty_Price_df.UPC))  # Variables which I am looking to optimize
    Price_list = dict(zip(Qty_Price_df.UPC, Qty_Price_df.AvgSellingPricewoTax))
    Qty_list = dict(zip(Qty_Price_df.UPC, Qty_Price_df.Units))  # Quantity sold per UPC; used in the constraint

    from pulp import *
    optimization = LpProblem("Perfect_Store", LpMaximize)
    Variable_list = LpVariable("SKU", lowBound=0)  # Continuous by default

    # Define objective function
    optimization += lpSum([Price_list[type] * Variable_list[type] for type in SKU_list]), "Total Sales by multiplying Price with Variable Qty"
    # *** Here is where I ran into the error message: 'LpVariable' object does not support indexing ***

    # Set a constraint for each SKU: the replenishment quantity should not be more than 5% over the quantity sold
    for c in SKU_list:
        optimization += (Qty_list[c] <= Qty_list[c] * 1.05), "Constraints for each SKU"

    print("Status:", LpStatus[optimization.status])

How to sum all values for numberDisplay, excluding a category

I have a set of data where I want to apply filters by default to a numberDisplay. The data is something like this:
data = [{category:'A',value:10},
{category:'B',value:10},
{category:'C',value:10},
{category:'S',value:10},
{category:'C',value:10},
{category:'A',value:10}]
I am trying to create a number display which will show the sum of values for every category other than 'S'. I tried using fake groups, but they are failing. What would be the best method to achieve this?
You don’t need a fake group for this, since you’re not trying to change the shape/structure of the aggregation. Ordinary crossfilter reductions cover this purpose.
You can simply do
cf.groupAll().reduceSum(d => d.category === 'S' ? 0 : d.value);
This will sum the value of every row included in the current filters, but will substitute zero if the row’s category is S.
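A runnable sketch with plain crossfilter, using the data array from the question:

// Sum of value over all rows except category 'S'
var cf = crossfilter(data);
var totalWithoutS = cf.groupAll().reduceSum(function (d) {
  return d.category === 'S' ? 0 : d.value;
});
console.log(totalWithoutS.value()); // 50: the 60 total minus the single category 'S' row
// For a dc.js numberDisplay, pass this groupAll object as the chart's group.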

How to design querying multiple tags on an analytics database

I would like to store custom tags on each user purchase transaction; for example, if a user bought shoes, the tags might be "SPORTS", "NIKE", "SHOES", "COLOUR_BLACK", "SIZE_12", and so on.
These are the tags the seller is interested in querying back to understand the sales.
My idea is: whenever a new tag comes in, create a new code for it (something like a hash code, but sequential). Codes start with the 26 letters "a"–"z", then continue "aa", "ab", "ac", ... "zz", and so on. All the tags given in one transaction are kept in a single varchar column called tag, separated by "|".
Let us assume the mapping is (at the application level):
"SPORTS" = a
"TENNIS" = b
"CRICKET" = c
...
...
"NIKE" = z //Brands company
"ADIDAS" = aa
"WOODLAND" = ab
...
...
SHOES = ay
...
...
COLOUR_BLACK = bc
COLOUR_RED = bd
COLOUR_BLUE = be
...
SIZE_12 = cq
...
So, storing the above purchase transaction, the tag column will hold tag = "|a|z|ay|bc|cq|", and the seller can count the number of SHOES sold by adding the WHERE condition tag LIKE '%|ay|%'. Now the problem is that I cannot use an index (sort key in the Redshift DB) for a LIKE pattern that starts with %. So how do I solve this issue, given that I might have 100 million records? I don't want a full table scan.
Any solution to fix this?
Update_1:
I have not followed the bridge table concept (cross-reference table), since I want to perform GROUP BY on the results after searching for the specified tags. My solution gives only one row when two tags match in a single transaction, but a bridge table would give me two rows, and then my sum() would be doubled.
I got a suggestion like the one below:
EXISTS (SELECT 1 FROM transaction_tag WHERE tag_id = 'zz' AND trans_id = tr.trans_id)
in the WHERE clause, once for each tag (note: this assumes tr is an alias to the transaction table in the surrounding query).
I have not followed this either, since I have to perform AND and OR conditions on the tags, for example ("SPORTS" AND "ADIDAS"), or "SHOES" AND ("NIKE" OR "ADIDAS").
Update_2:
I have not followed the bitfield approach, since I don't know whether Redshift supports it. Also, assuming my system will have at least 3,500 tags, allocating one bit for each results in about 438 bytes per transaction, even though at most 5 tags can be given for a transaction. Any optimization here?
Solution_1:
I have thought of adding a min value (SMALLINT) and a max value (SMALLINT) along with the tags column, and applying an index on them.
So, something like this:
"SPORTS" = a = 1
"TENNIS" = b = 2
"CRICKET" = c = 3
...
...
"NIKE" = z = 26
"ADIDAS" = aa = 27
So my column values are
`tag="|a|z|ay|bc|cq|"` //sorted?
`minTag=1`
`maxTag=95` //for cq
And the query for searching shoes (ay = 51) is:
minTag <= 51 AND maxTag >= 51 AND tag LIKE '%|ay|%'
And the query for searching shoes (ay = 51) AND SIZE_12 (cq = 95) is:
minTag <= 51 AND maxTag >= 95 AND tag LIKE '%|ay|%|cq|%'
Will this give any benefit? Kindly suggest any alternatives.
You can implement auto-tagging while the files get loaded to S3. Tagging at the DB level is too late in the process: it is tedious and involves a lot of hard-coding.
1. While loading to S3, tag the object using the AWS s3api (capture the tags dynamically by passing them as parameters), for example:
aws s3api put-object-tagging --bucket <bucket> --key <key> --tagging "TagSet=[{Key=Adidas,Value=AY}]"
2. Load the tags to DynamoDB as a metadata store.
3. Load the data to Redshift using the S3 COPY command.
You can store the tags column as a varchar bit mask, i.e. a strictly defined sequence of 1s and 0s, so that if a purchase is marked by a tag there is a 1 at that tag's position and a 0 otherwise. For every row you will have a sequence of 0s and 1s of the same length as the number of tags you have. This sequence is sortable; you would still need to look into the middle of the string, but you will know the exact position to check, so you don't need LIKE, just SUBSTRING. For further optimization, you can convert this bit mask to integer values (unique for each sequence) and match on those, but AFAIK Redshift doesn't support that out of the box yet; you would have to define the rules yourself.
UPD: It looks like the best option here is to keep tags in a separate table and create an ETL process that unwraps the tags into a tabular structure of (order_id, tag_id), distributed by order_id and sorted by tag_id. Optionally, you can create a view that joins this table with the order table. Then lookups for orders with a particular tag, and further aggregations over orders, should be efficient. There is no silver bullet for optimizing this in a flat table, at least none I know of that would not bring a lot of unnecessary complexity compared to the "relational" solution.
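To connect this with the doubled-sum concern from Update_1: with the (order_id, tag_id) table, each tag condition can go into its own EXISTS subquery, so the outer query keeps one row per order and aggregates are not doubled. An illustrative sketch (table and column names are made up):

-- One row per order/tag pair, laid out for tag lookups
-- CREATE TABLE order_tag (order_id BIGINT, tag_id VARCHAR(4))
--   DISTKEY(order_id) SORTKEY(tag_id);

-- Total for orders tagged SHOES AND (NIKE OR ADIDAS); one outer row per order
SELECT SUM(o.amount)
FROM orders o
WHERE EXISTS (SELECT 1 FROM order_tag t
              WHERE t.order_id = o.order_id AND t.tag_id = 'ay')          -- SHOES
  AND EXISTS (SELECT 1 FROM order_tag t
              WHERE t.order_id = o.order_id AND t.tag_id IN ('z', 'aa')); -- NIKE OR ADIDAS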

Google Sheets - Dependent drop-down lists

I am recreating and expanding on a doc I had previously made. I have already brought in the script I had used originally, and tweaked it where I believed appropriate to get it working in this sheet, but I must have missed something. Editable samples of the 3 spreadsheet files involved can be found here. These files are a sample "Price List", "Catalog"(which aggregates manufacturer names from all price lists, and also has a "Catalog" tab for misc items not sold by one of my primary vendors), and "Addendum B" which is the file I require assistance with.
This document is an addendum to my contracts which lists all equipment being sold as part of that contract. It has 2 sheets in it ("Addendum B" and "XREF"), and "Addendum B" has several dependent columns: Vendor, Manufacturer, Model, Description, and Price. Their dependencies are as follows:
Currently Working
Vendor: Basic data validation pulling from XREF!A2:A.
Not working, script in file
Manufacturer: Based on the Vendor selected, should be a drop-down
list generated from the column headed with that vendor's name on
"XREF".
Now here's where it gets tricky, beyond what I had previously done.
Model: I want this column to be a drop-down listing all model numbers
associated with that manufacturer, from a completely separate price
list provided to me by my vendor. (I have shared a sample price list which reflects column positions as they appear in all such files.)
Description: Displays the corresponding description for the Model selected, from the price list selected in the Vendor column.
Price: Displays the corresponding markup price for the Model selected, from the price list selected in the Vendor column.
And that about summarizes my goals and what I'm struggling with.
So I looked into your script file in the sheet Addendum B.
I have made a few edits and it should be working now; the modified code:
function onEdit() {
  var ss = SpreadsheetApp.getActiveSpreadsheet(),
      sheet = ss.getActiveSheet(),
      name = sheet.getName();
  if (name != 'Addendum B') return;
  var range = sheet.getActiveRange(),
      col = range.getColumn();
  if (col != 6) return; // Your col was set to 5; changed it to 6!
  var val = range.getValue(),
      dv = ss.getSheetByName('XREF'),
      data = dv.getDataRange().getValues(),
      catCol = data[0].indexOf(val),
      list = [];
  Logger.log(catCol);
  for (var i = 1, len = 100; i < len; i++) // The problem was here: too many items in the list! A validation list cannot have more than 500 items
    list.push(data[i][catCol]);
  var listRange = dv.getRange(2, catCol + 1, dv.getLastRow() - 1, 1);
  Logger.log(list);
  var cell = sheet.getRange(range.getRow(), col - 1);
  var rule = SpreadsheetApp.newDataValidation()
      .requireValueInRange(listRange) // Use requireValueInRange instead to fix the problem
      .build();
  cell.setDataValidation(rule);
  Logger.log(cell.getRow());
}
The reason your validation was not working is that you had more than 500 items in your data validation list. I modified it to take the same values from a range instead. Hope you find that helpful!
Now for the remaining 3 questions, here are my comments and thoughts:
1) I didn't find any code related to the problem you mentioned in your question, so I am going to assume you are asking for general ideas on how to achieve this.
2) You basically approach the problem the same way as in the code above! Once a manufacturer is selected, the script looks for that manufacturer in the sheet and updates the data validation in the corresponding Model column.
You would modify the code like so:
var ss = SpreadsheetApp.openById("1nbCJOkpIQxnn71sJPj6X4KaahROP5cMg1SI9xIeJdvY");
// The line above opens the catalog spreadsheet.
var dv = ss.getSheetByName('Misc_Catalog');
// The line above selects the Misc_Catalog tab.
3) A better approach would be to use a sidebar/dialog box to validate your input and then add it to the sheet at the end. (It looks cleaner and also avoids unnecessary onEdit triggers in the sheet, which can take a while to run.)
You can find more details here: https://developers.google.com/apps-script/guides/dialogs
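For point 3, a minimal sketch of wiring up such a dialog (the menu name and the 'ItemForm' HTML file are hypothetical):

// Adds a custom menu that opens a dialog for validated input
function onOpen() {
  SpreadsheetApp.getUi()
      .createMenu('Addendum') // hypothetical menu name
      .addItem('Add line item...', 'showItemDialog')
      .addToUi();
}

function showItemDialog() {
  // 'ItemForm' is a hypothetical HTML file in the same Apps Script project
  var html = HtmlService.createHtmlOutputFromFile('ItemForm')
      .setWidth(400)
      .setHeight(300);
  SpreadsheetApp.getUi().showModalDialog(html, 'Add line item');
}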
