I have two related one-to-many entities,
Race and Cars (one race contains a lot of cars).
I need to generate a JSON result to pass to jqGrid. I thought maybe it is possible to do that without creating a new class which would contain the properties. I thought I could go like this:
var jsonData = new
{
    total = totalPages,
    page = page,
    records = totalRecords,
    rows = (from c in Races
            select new
            {
                cell = new string[] {
                    c.Date.ToString(),
                    c.Type.ToString(),
                    c.Cars // But how can I loop over the whole Cars collection here?
                    // c.Cars.Id - needs iteration
                    // c.Cars.Name - needs iteration
                    // c.Cars.Speed - needs iteration
                }
            }).ToArray()
};
But the Cars property represents a collection. How can I iterate over it inside the collection initializer? Or would it be better to create a class which would contain all the properties I need?
Any ideas?
Let's say Car has properties Name, Speed, and Id, and Race has properties Date and Type.
The data will be displayed like this:
Date | Type | Id | Name | Speed
02/03/2011 | A | 1 | MegaName1 | 130
02/03/2011 | A | 2 | MegaName2 | 112
02/03/2011 | A | 3 | MegaName3 | 132
03/05/2011 | B | 4 | MegaName2 | 112
03/05/2011 | B | 5 | MegaName4 | 33
Try the following (the second from clause flattens each race's Cars collection, so there is one output row per car):
var jsonData = new
{
    total = totalPages,
    page = page,
    records = totalRecords,
    rows =
        (from race in races
         from car in race.Cars
         select new
         {
             cell = new string[]
             {
                 race.Date.ToString(),
                 race.Type.ToString(),
                 car.Id.ToString(),
                 car.Name,
                 car.Speed.ToString()
             }
         }).ToArray()
};
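For reference, the paired from clauses compile to SelectMany; a minimal sketch of the equivalent method syntax (assuming the same Race and Car shapes as above):
var rows = races
    .SelectMany(race => race.Cars, (race, car) => new
    {
        // one flattened row per (race, car) pair
        cell = new string[]
        {
            race.Date.ToString(),
            race.Type.ToString(),
            car.Id.ToString(),
            car.Name,
            car.Speed.ToString()
        }
    })
    .ToArray();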
I'm trying to make a query with Eloquent but cannot get the result I really want. I'm using Laravel 6.
I have the following tables:
Table Users:
id | user
============
1 | Andy
Table Colors:
id | color
============
1 | red
2 | blue
3 | white
4 | green
5 | black
Table user_colors:
id | user_id | color_id
==========================
1 | 1 | 1
2 | 1 | 4
What I want is to get all the colors, but mark as active the ones in the pivot table. Something like this:
id | color | active
====================
1 | red | 1
2 | blue
3 | white
4 | green | 1
5 | black
Any idea?
This should return a collection of Color objects, with the ones associated with the given user having a property active set to true:
$user_colors = User::find(1)->colors;
$colors = Color::all()->transform(function ($color) use ($user_colors) {
    if ($user_colors->contains($color)) {
        $color->active = true;
    }
    return $color;
});
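A minimal usage sketch of the result (hypothetical; the active attribute is only set on the matched colors, hence the null coalescing):
foreach ($colors as $color) {
    $active = $color->active ?? false;
    echo $color->id . ' | ' . $color->color . ' | ' . ($active ? '1' : '') . PHP_EOL;
}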
Whilst this should achieve the result you need, without knowing the context in which you want to use it, it may be inefficient.
There are several ways to achieve this task. One of them could be to use appends in models.
Appends are attributes that you can dynamically add to your model class.
E.g. you can edit your Color model like this:
class Color extends Model {
    // Your code...

    protected $appends = ['active'];

    public function getActiveAttribute() {
        $is_active = UserColor::whereColorId($this->id)->first();
        if (!is_null($is_active)) return true; // Or 1, up to you
        return false; // Or 0, up to you
    }
}
If you use this method, you can access the $color->active attribute on any Color instance.
Note that this is only one of the different methods that you can use to do this task.
Reference: Laravel Appends Documentation
I have the following data:
+-----------+-----------+-----------+
| start     | stop      | status    |
+-----------+-----------+-----------+
| 09:01:10  | 09:01:40  | active    |
| 09:02:30  | 09:04:50  | active    |
| 09:10:01  | 09:11:50  | active    |
+-----------+-----------+-----------+
I want to fill in the gaps with "passive"
+-----------+-----------+-----------+
| start     | stop      | status    |
+-----------+-----------+-----------+
| 09:01:10  | 09:01:40  | active    |
| 09:01:40  | 09:02:30  | passive   |
| 09:02:30  | 09:04:50  | active    |
| 09:04:50  | 09:10:01  | passive   |
| 09:10:01  | 09:11:50  | active    |
+-----------+-----------+-----------+
How can I do this in the M query language?
You could try something like the below (my first two steps someTable and changedTypes are just to re-create your sample data on my end):
let
someTable = Table.FromColumns({{"09:01:10", "09:02:30", "09:10:01"}, {"09:01:40", "09:04:50", "09:11:50"}, {"active", "active", "active"}}, {"start","stop","status"}),
changedTypes = Table.TransformColumnTypes(someTable, {{"start", type duration}, {"stop", type duration}, {"status", type text}}),
listOfRecords = Table.ToRecords(changedTypes),
transformList = List.Accumulate(List.Skip(List.Positions(listOfRecords)), {listOfRecords{0}}, (listState, currentIndex) =>
let
previousRecord = listOfRecords{currentIndex-1},
currentRecord = listOfRecords{currentIndex},
thereIsAGap = currentRecord[start] <> previousRecord[stop],
recordsToAdd = if thereIsAGap then {[start=previousRecord[stop], stop=currentRecord[start], status="passive"], currentRecord} else {currentRecord},
append = listState & recordsToAdd
in
append
),
backToTable = Table.FromRecords(transformList, type table [start=duration, stop=duration, status=text])
in
backToTable
At the changedTypes step, the table holds the three active rows from the question; the backToTable step returns the five-row table with the passive gaps filled in.
To integrate with your existing M code, you'll probably need to:
remove someTable and changedTypes from my code (and replace with your existing query)
change changedTypes in the listOfRecords step to whatever your last step is called, as sketched below (otherwise you'll get an error if you don't have a changedTypes expression in your code).
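For example, if your existing query's last step were hypothetically named #"Changed Type", that line would become:
listOfRecords = Table.ToRecords(#"Changed Type"),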
Edit:
Further to my answer, what I would suggest is:
Try changing this line in the code above:
listOfRecords = Table.ToRecords(changedTypes),
to
listOfRecords = List.Buffer(Table.ToRecords(changedTypes)),
I found that buffering the list in memory reduced my refresh time significantly (by maybe ~90%). I imagine there are limits and drawbacks (e.g. if the list can't fit in memory), but it might be okay for your use case.
Do you experience similar behaviour? Also, my rough timings suggest the code overall has non-linear complexity, unfortunately.
Final note: I found that generating and processing 100k rows resulted in a stack overflow whilst refreshing the query (this might have been due to the generation of the input rows rather than the insertion of new rows, I don't know). So clearly, this approach has limits.
I think I may have a better performing solution.
From your source table (assuming it's sorted), add an index column starting from 0 and another index column starting from 1, then merge the table with itself using a left outer join on the index columns, and expand the start column.
Remove columns except for stop, status, and start.1 and filter out nulls.
Rename columns to start, status, and stop and replace "active" with "passive".
Finally, append this table to your original table.
let
Source = Table.RenameColumns(#"Removed Columns",{{"Column1.2", "start"}, {"Column1.3", "stop"}, {"Column1.4", "status"}}),
Add1Index = Table.AddIndexColumn(Source, "Index", 1, 1),
Add0Index = Table.AddIndexColumn(Add1Index, "Index.1", 0, 1),
SelfMerge = Table.NestedJoin(Add0Index,{"Index"},Add0Index,{"Index.1"},"Added Index1",JoinKind.LeftOuter),
ExpandStart1 = Table.ExpandTableColumn(SelfMerge, "Added Index1", {"start"}, {"start.1"}),
RemoveCols = Table.RemoveColumns(ExpandStart1,{"start", "Index", "Index.1"}),
FilterNulls = Table.SelectRows(RemoveCols, each ([start.1] <> null)),
RenameCols = Table.RenameColumns(FilterNulls,{{"stop", "start"}, {"start.1", "stop"}}),
ActiveToPassive = Table.ReplaceValue(RenameCols,"active","passive",Replacer.ReplaceText,{"status"}),
AppendQuery = Table.Combine({Source, ActiveToPassive}),
#"Sorted Rows" = Table.Sort(AppendQuery,{{"start", Order.Ascending}})
in
#"Sorted Rows"
This should be O(n) complexity with similar logic to #chillin, but I think it should be faster than using a custom function, since it uses a built-in merge, which is likely to be highly optimized.
I would approach this as follows:
Duplicate the first table.
Replace "active" with "passive".
Remove the start column.
Rename stop to start.
Create a new stop column by looking up the earliest start time from your original table that occurs after the current stop time.
Filter out nulls in this new column.
Append this table to the original table.
The M code will look something like this:
let
Source = <...your starting table...>,
PassiveStatus = Table.ReplaceValue(Source,"active","passive",Replacer.ReplaceText,{"status"}),
RemoveStart = Table.RemoveColumns(PassiveStatus,{"start"}),
RenameStart = Table.RenameColumns(RemoveStart,{{"stop", "start"}}),
AddStop = Table.AddColumn(RenameStart, "stop", (C) => List.Min(List.Select(Source[start], each _ > C[start])), type time),
RemoveNulls = Table.SelectRows(AddStop, each ([stop] <> null)),
CombineTables = Table.Combine({Source, RemoveNulls}),
#"Sorted Rows" = Table.Sort(CombineTables,{{"start", Order.Ascending}})
in
#"Sorted Rows"
The only tricky bit above is the custom column part where I define the new column like this:
(C) => List.Min(List.Select(Source[start], each _ > C[start]))
This takes each item in the column/list Source[start] and compares it to the time in the current row. It selects only the ones that occur after the time in the current row, and then takes the min over that list to find the earliest one. For example, for the passive row that begins at 09:01:40, the starts occurring after it are 09:02:30 and 09:10:01, and List.Min picks 09:02:30 as the new stop.
This question arises from some difficulties in creating a crossfilter dataset, in particular on how to group the different dimensions and compute derived values. The final aim is to have a number of dc.js graphs using the dimensions and groups.
(Fiddle example https://jsfiddle.net/raino01r/0vjtqsjL/)
Question
Before going on with the explanation of the setting, the key question is the following:
How do I create custom add, remove, and init functions to pass to .reduce so that the first two do not sum the same feature multiple times?
Data
Let's say I want to monitor the failure rate of a number of machines (just an example). I do this using different dimensions: month, machine's location, and type of failure.
For example I have the data in the following form:
| month | room | failureType | failCount | machineCount |
|---------|------|-------------|-----------|--------------|
| 2015-01 | 1 | A | 10 | 5 |
| 2015-01 | 1 | B | 2 | 5 |
| 2015-01 | 2 | A | 0 | 3 |
| 2015-01 | 2 | B | 1 | 3 |
| 2015-02 | . | . | . | . |
Expected
For the three given dimensions, I should have:
month_1_rate = $\frac{10+2+0+1}{5+3}$;
room_1_rate = $\frac{10+2}{5}$;
type_A_rate = $\frac{10+0}{5+3}$.
Idea
Essentially, what counts in this setting is the pair (month, room). I.e. given a month and a room, there should be a rate attached to them (then crossfilter should act to take the other filters into account).
Therefore, a way to go could be to store the pairs that have already been used and not sum machineCount for them again; however, we still want to update the failCount value.
Attempt (failing)
My attempt was to create custom reduce functions that do not sum machineCount values that were already taken into account.
However there are some unexpected behaviours. I'm sure this is not the way to go - so I hope to have some suggestion on this.
// A dimension is one of:
// ndx = crossfilter(data);
// ndx.dimension(function(d){return d.month;})
// ndx.dimension(function(d){return d.room;})
// ndx.dimension(function(d){return d.failureType;})
// Goal: have a general way to get the group given the dimension:
function get_group(dim){
return dim.group().reduce(add_rate, remove_rate, initial_rate);
}
// month is given as datetime object
var monthNameFormat = d3.time.format("%Y-%m");
//
function check_done(p, v){
return p.done.indexOf(v.room+'_'+monthNameFormat(v.month))==-1;
}
// The three functions needed for the custom `.reduce` block.
function add_rate(p, v){
var index = check_done(p, v);
if (index) p.done.push(v.room+'_'+monthNameFormat(v.month));
var count_to_sum = (index)? v.machineCount:0;
p.mach_count += count_to_sum;
p.fail_count += v.failCount;
p.rate = (p.mach_count==0) ? 0 : p.fail_count*1000/p.mach_count;
return p;
}
function remove_rate(p, v){
var index = check_done(p, v);
var count_to_subtract = (index)? v.machineCount:0;
if (index) p.done.push(v.room+'_'+monthNameFormat(v.month));
p.mach_count -= count_to_subtract;
p.fail_count -= v.failCount;
p.rate = (p.mach_count==0) ? 0 : p.fail_count*1000/p.mach_count;
return p;
}
function initial_rate(){
return {rate: 0, mach_count:0, fail_count:0, done: new Array()};
}
Connection with dc.js
As mentioned, the previous code is needed to create the dimensions and groups to be passed to three different bar charts using dc.js.
Each graph will have .valueAccessor(function(d){ return d.value.rate; }).
See the jsfiddle (https://jsfiddle.net/raino01r/0vjtqsjL/) for an implementation. The numbers are different, but the data structure is the same. Notice that in the fiddle you would expect a machine count of 18 (in both months), but you always get double that (because of the 2 different locations).
Edit
Reductio + dc.js
Following Ethan Jewett's answer, I used reductio to take care of the grouping. The updated fiddle is here: https://jsfiddle.net/raino01r/dpa3vv69/
My reducer object needs two exceptions (month, room) when summing the machineCount values. Hence it is built as follows:
var reducer = reductio()
reducer.value('mach_count')
.exception(function(d) { return d.room; })
.exception(function(d) { return d.month; })
.exceptionSum(function(d) { return d.machineCount; })
reducer.value('fail_count')
.sum(function(d) { return d.failCount; })
This seems to fix the numbers when the graphs are rendered.
However, I do see some strange behaviour when filtering a single month and looking at the numbers in the type graph.
Possible solution
Rather than creating two exceptions, I could merge the two fields when processing the data, i.e. as soon as the data is loaded I could do:
data.forEach(function(x){
    x['room_month'] = x['room'] + '_' + x['month'];
});
Then the above reduction code should become:
var reducer = reductio()
reducer.value('mach_count')
.exception(function(d) { return d.room_month; })
.exceptionSum(function(d) { return d.machineCount; })
reducer.value('fail_count')
.sum(function(d) { return d.failCount; })
This solution seems to work. However, I am not sure if this is a sensible thing to do: if the dataset is large, adding a new field could slow things down quite a lot!
A few things:
Don't calculate rates in your Crossfilter reducers. Calculate the components of the rates instead. This will keep the reducers both simpler and faster. Do the actual division in your value accessor (see the sketch after these points).
You've basically got the right idea. I think there are two problems that I see immediately:
In your remove_rate you are not removing the key from the p.done array. You should be doing something like if (index) p.done.splice(p.done.indexOf(v.room+'_'+monthNameFormat(v.month)), 1); to remove it.
In your reduce functions, index is a boolean. (index == -1) will never evaluate to true, IIRC. So your added machine count will always be 0. Use var count_to_sum = index ? v.machineCount:0; instead.
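Here is a minimal sketch of the value-accessor approach from the first point (chart stands for any of your dc.js charts, and the group is assumed to expose fail_count and mach_count):
chart.valueAccessor(function (d) {
    // divide at display time; guard against an empty group
    return d.value.mach_count === 0 ? 0 : d.value.fail_count / d.value.mach_count;
});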
If you want to put together a working example, I or someone else will be happy to get it going for you, I'm sure.
You may also want to try Reductio. Crossfilter reducers are difficult to do right and efficiently, so it may make sense to use a library to help. With Reductio, creating a group that calculates your machine count and failure count looks like this:
var reducer = reductio()
reducer.value('mach_count')
.exception(function(d) { return d.room; })
.exceptionSum(function(d) { return d.machineCount; })
reducer.value('fail_count')
.sum(function(d) { return d.failCount; })
var dim = ndx.dimension(...)
var grp = dim.group()
reducer(grp)
I am trying to make a table that stores 3 parts, each of which will be huge in length. The first is the name, the second is EID, the third is SID. I want to be able to get the information like this: name[1] gives me the first name in the list of names, and likewise for the other two. I'm running into problems with how to do this because it seems like everyone has their own way, and they're all very different from one another. Right now this is what I have:
info = {
    {name = "btest", EID = "19867", SID = "664"},
    {name = "btest1", EID = "19867", SID = "664"},
    {name = "btest2", EID = "19867", SID = "664"},
    {name = "btest3", EID = "19867", SID = "664"},
}
Theoretically speaking, would I be able to just say info.name[1]? Or how else could I arrange the table so I can access each part separately?
There are two main "ways" of storing the data:
Horizontal partitioning (Object-oriented)
Store each row of the data in a table. All tables must have the same fields.
Advantages: Each table contains related data, so it's easier to pass it around (e.g., f(info[5])).
Disadvantages: A table has to be created for each element, adding some overhead.
This looks exactly like your example:
info = {
    {name = "btest", EID = "19867", SID = "664"},
    -- etc ...
}
print(info[2].name) -- access second name
Vertical partitioning (Array-oriented)
Store each property in a table. All tables must have the same length.
Advantages: Fewer tables overall, and slightly more time- and space-efficient (the Lua VM uses actual arrays).
Disadvantages: Needs two objects to refer to a row: the table and the index. It's also harder to insert/delete (see the sketch after the example below).
Your example would look like this:
info = {
    names = { "btest", "btest1", "btest2", "btest3", },
    EID = { "19867", "19867", "19867", "19867", },
    SID = { "664", "664", "664", "664", },
}
print(info.names[2]) -- access second name
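To see why inserting is harder in the vertical form, compare adding one record in each layout (a sketch reusing the two info tables above):
-- horizontal: a single insert keeps the record together
table.insert(info, {name = "btest4", EID = "19867", SID = "664"})

-- vertical: three parallel inserts that must stay in sync
table.insert(info.names, "btest4")
table.insert(info.EID, "19867")
table.insert(info.SID, "664")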
So which one should I choose?
Unless you really need the performance, you should go with horizontal partitioning. It's far more common to work over full rows, and it gives you more freedom in how you use your structures. If you decide to go full OO, having your data in horizontal form will be much easier.
Addendum
The names "horizontal" and "vertical" come from the table representation of a relational database.
A horizontal partition is one row of the table:

--+-------+-----+-----+
2 |       |     |     |
--+-------+-----+-----+

A vertical partition is one column:

  | names |
--+-------+
1 |       |
2 |       |
3 |       |
Your info table is an array, so you can access items using info[N], where N is any number from 1 to the number of items in the table. Each item of the info table is itself a table. The 2nd item of info is info[2], so the name field of that item is info[2].name.
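For example, with the info table from the question:
print(info[2].name) -- btest1
print(info[2].EID)  -- 19867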
I want to read Excel 2003 (cannot change this, as it's coming from a third party) and group the data in a List or Dictionary (I don't know which one is better).
For example, below is the Excel formatting:
Books Data [first row, first column in Excel]
(second row: no records)
Code, Name, IBN [third row: second, third, and fourth columns]
Aust [fourth row, first column]
UX test1 34 [fifth row: second, third, and fourth columns]
......
....
Books Data
Code Name IBN
Aust
UX test1 34
UZ test2 345
UN test3 5654
US
UX name1 567
TG nam2 123
UM name3 234
I am reading the Excel data using the following code (some help from Google):
string filename = @"C:\" + "Book1.xls";
string connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                          "Data Source=" + filename + ";" +
                          "Extended Properties=Excel 8.0;";
OleDbDataAdapter dataAdapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", connectionString);
DataSet myDataSet = new DataSet();
dataAdapter.Fill(myDataSet, "BookInfo");
DataTable dataTable = myDataSet.Tables["BookInfo"];
var rows = from p in dataTable.AsEnumerable()
           where p[0].ToString() != null || p[0].ToString() != "" && p.Field<string>("F2") != null
           select new
           {
               countryName = p[0],
               bookCode = p.Field<string>("F2"),
               bookName = p.Field<string>("F3")
           };
The code above is not good: to get the "Code" I am using "F2", and for the country I am using p[0]. What should I use to get the code and name for each country?
Also, it gives the information I want, but I don't know how to put it into a list, dictionary, or class so I can get data by passing the country name as a parameter.
In short, it must first put all the data into a list or dictionary, and then you can query that list or dictionary, filtering by country.
Thanks
There are two things you need to do:
First, you need to reformat the spreadsheet to have the column headers on the first row, as the table below shows:
| Country | Code | Name | IBN |
|---------|------|---------|------|
| Aust | UX | test1 | 34 |
| Aust | UZ | test2 | 345 |
| Aust | UN | test3 | 5654 |
| US | UX | name1 | 567 |
| US | TG | name2 | 123 |
| US | UM | name3 | 234 |
Second, use the Linq to Excel library to retrieve the data. It takes care of making the OleDb connection and creating the SQL for you. Below is an example of how easy it is to use the library:
var book = new ExcelQueryFactory("pathToExcelFile");
var australia = from x in book.Worksheet()
                where x["Country"] == "Aust"
                select new
                {
                    Country = x["Country"],
                    BookCode = x["Code"],
                    BookName = x["Name"]
                };
Check out the Linq to Excel intro video for more information about the open source project.
Suggestion 1
Check out this link... as AKofC suggests, creating a class to hold your data would be your first port of call. The link I have posted has a small example of the sort of idea we are proposing.
Suggestion 2 with example...
The obvious thing to do, given the code you have posted, would be to create a new class to store your book information in.
Then you simply define which fields from your Excel document you want to pass into the new instance of your book information class.
New Book Information Class:
class MyBookInfo
{
    public string CountryName { get; set; }
    public string BookCode { get; set; }
    public string BookName { get; set; }
}
Method To Retrieve Info:
public void GetMyBookInfoFromExcelDocument()
{
    string filename = @"C:\" + "Book1.xls";
    string connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                              "Data Source=" + filename + ";" +
                              "Extended Properties=Excel 8.0;";
    OleDbDataAdapter dataAdapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", connectionString);
    DataSet myDataSet = new DataSet();
    dataAdapter.Fill(myDataSet, "BookInfo");
    DataTable dataTable = myDataSet.Tables["BookInfo"];
    var rows = from p in dataTable.AsEnumerable()
               where p[0].ToString() != null || p[0].ToString() != "" && p.Field<string>("F2") != null
               select new MyBookInfo
               {
                   CountryName = p.Field<string>("InsertFieldNameHere"),
                   BookCode = p.Field<string>("InsertFieldNameHere"),
                   BookName = p.Field<string>("InsertFieldNameHere")
               };
}
From what I understand, I suggest creating a BookData class containing the properties you need, in this case Country, Code, Name, and IBN.
Then, once you've filled your DataSet with the Excel data, create a new List and loop through the DataRows in the DataSet, adding the Excel values to the List (a sketch of that loop is below).
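A minimal sketch of that loop (assuming a BookData class with those four string properties and the reformatted Country/Code/Name/IBN column headers from the first answer):
var bookList = new List<BookData>();
foreach (DataRow row in myDataSet.Tables["BookInfo"].Rows)
{
    bookList.Add(new BookData
    {
        Country = row.Field<string>("Country"),
        Code = row.Field<string>("Code"),
        Name = row.Field<string>("Name"),
        IBN = row.Field<string>("IBN")
    });
}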
Then you can use Linq on the List like so:
List<BookData> results = (from books in bookList
                          where books.Country == "US"
                          select books).ToList();
Or something like that. I don't have Visual Studio on me, and Intellisense has spoiled me, so yeah. >__>
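For the dictionary route mentioned in the question, a hedged sketch building on the same hypothetical bookList:
// group once, then look up all books for a country by key
Dictionary<string, List<BookData>> booksByCountry = bookList
    .GroupBy(b => b.Country)
    .ToDictionary(g => g.Key, g => g.ToList());

List<BookData> usBooks = booksByCountry["US"];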