Let's say I have a table like so:
local t = {
    { value = 4 },
    { value = 3 },
    { value = 1 },
    { value = 2 },
}
and I want to iterate over this and print the values in order, so the output is like so:
1
2
3
4
How do I do this? I understand how to use ipairs, pairs, and table.sort, but that only works if I'm using table.insert and the key is valid; I need to loop over this in order of the value.
I tried a custom function but it simply printed them in the incorrect order.
I have tried:
Creating an index and looping that
Sorting the table (throws error: attempt to perform __lt on table and table)
And a combination of sorts, indexes and other tables that not only didn't work, but also made it very complicated.
I am well and truly stumped.
Sorting the table was indeed the right solution. The error "attempt to perform __lt on table and table" sounds like you tried to compare two entries directly with a < b.
For Lua to be able to sort values, it has to know how to compare them. It knows how to compare numbers and strings, but by default it has no idea how to compare two tables. Consider this:
local people = {
{ name = 'fred', age = 43 },
{ name = 'ted', age = 31 },
{ name = 'ned', age = 12 },
}
If I call sort on people, how can Lua know what I intend? It doesn't know what 'age' or 'name' means, or which I'd want to use for comparison. I have to tell it.
It's possible to add a metatable to a table which tells Lua what the < operator means for a table, but you can also supply sort with a callback function that tells it how to compare two objects.
You supply sort with a function that receives two values and you return whether the first is "less than" the second, using your knowledge of the tables. In the case of your tables:
table.sort(t, function(a,b) return a.value < b.value end)
for i,entry in ipairs(t) do
print(i,entry.value)
end
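If you would rather take the metatable route mentioned above, here is a minimal sketch (my own example): it gives each entry the same metatable, whose __lt compares the value field, so a plain table.sort works without a comparator.
local mt = { __lt = function(a, b) return a.value < b.value end }
for _, entry in ipairs(t) do
    setmetatable(entry, mt)
end
table.sort(t) -- both operands share the same __lt, so < is now defined for the entries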
If you want to leave the original table unchanged, you could create a custom 'sort by value' iterator like this:
local function valueSort(a,b)
return a.value < b.value;
end
function sortByValue( tbl ) -- use as iterator
-- build new table to sort
local sorted = {};
for i,v in ipairs( tbl ) do sorted[i] = v end;
-- sort new table
table.sort( sorted, valueSort );
-- return iterator
return ipairs( sorted );
end
When sortByValue() is called, it clones tbl to a new sorted table, and then sorts the sorted table. It then hands the sorted table over to ipairs(), and ipairs outputs the iterator to be used by the for loop.
To use:
for i,v in sortByValue( myTable ) do
print(v)
end
While this ensures your original table remains unaltered, it has the downside that each time you do an iteration the iterator has to clone myTable to make a new sorted table, and then table.sort that sorted table.
If performance is vital, you can greatly speed things up by 'caching' the work done by the sortByValue() iterator. Updated code:
local resort, sorted = true;
local function valueSort(a,b)
return a.value < b.value;
end
function sortByValue( tbl ) -- use as iterator
if not sorted then -- rebuild sorted table
sorted = {};
for i,v in ipairs( tbl ) do sorted[i] = v end;
resort = true;
end
if resort then -- sort the 'sorted' table
table.sort( sorted, valueSort );
resort = false;
end
-- return iterator
return ipairs( sorted );
end
Each time you add or remove an element to/from myTable set sorted = nil. This lets the iterator know it needs to rebuild the sorted table (and also re-sort it).
Each time you update a value property within one of the nested tables, set resort = true. This lets the iterator know it has to do a table.sort.
Now, when you use the iterator, it will try and re-use the previous sorted results from the cached sorted table.
If it can't find the sorted table (e.g. on first use of the iterator, or because you set sorted = nil to force a rebuild), it will rebuild it. If it sees it needs to re-sort (e.g. on first use, or if the sorted table was rebuilt, or if you set resort = true), then it will re-sort the sorted table.
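For example, the bookkeeping might look like this (just a sketch; it assumes the calling code lives in the same chunk as the resort and sorted locals above, and that myTable holds entries shaped like your original table):
table.insert(myTable, { value = 5 })
sorted = nil  -- an element was added: rebuild (and re-sort) on the next iteration

myTable[1].value = 10
resort = true -- only a value changed: re-sort on the next iteration

for i, v in sortByValue(myTable) do
    print(i, v.value)
end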
Related
I'm pulling data from a third-party API. The API runs multiple times a day. If the same data is already present in the table it should ignore that record; if there are any changes it should update that record, or insert a new record if anything new shows up in the JSON received.
I'm using the below code for inserting any new data.
var input = JsonConvert.DeserializeObject<List<DeserializeLookup>>(resultJson).ToList();
var entryset = input.Select(y => new Lookup
{
lookupType = "JOBCODE",
code = y.Code,
description = y.Description,
isNew = true,
lastUpdatedDate = DateTime.UtcNow
}).ToList();
await _context.Lookup.AddRangeAsync(entryset);
await _context.SaveChangesAsync();
But after the first run, when the API runs again, it inserts the same data into the table again, so duplicate entries end up in the table. To handle this, I used a foreach loop as below before inserting data into the table.
foreach (var item in input)
{
if (!_context.Lookup.Any(r =>
r.code== item.Code))
{
//above insert code
}
}
But this doesn't work as expected. Also, the API takes a lot of time to run when I use the foreach loop. Is there a solution to this in .NET Core 3.1?
It will be better if you try it this way: collect the new items first, then insert them in one batch.
List<DeserializeLookup> newList = new();
foreach (var item in input)
{
    if (!_context.Lookup.Any(r => r.code == item.Code))
    {
        newList.Add(item);
    }
}
// map the new items to entities, as in your existing insert code
var entryset = newList.Select(y => new Lookup
{
    lookupType = "JOBCODE",
    code = y.Code,
    description = y.Description,
    isNew = true,
    lastUpdatedDate = DateTime.UtcNow
}).ToList();
await _context.Lookup.AddRangeAsync(entryset);
await _context.SaveChangesAsync();
I'm on my phone, so forgive me for not being able to format the code in my response. The solution to your problem is something I actually just encountered myself while syncing data from an Azure Function and a third-party app into a SQL database.
Depending on your table schema, you would need one column with a unique identifier. Make this column a primary key (first step to preventing duplicates). Here’s a resource for that: https://www.w3schools.com/sql/sql_primarykey.ASP
The next step you want to take care of is your stored procedure. You’ll need to perform what’s commonly referred to as an UPSERT. To do this you’ll need to merge a table with the incoming data...on a specified column (whichever is your primary key).
That would look something like this:
MERGE Table_1 AS T1
USING Incoming_Data AS source
ON T1.column1 = source.column1
-- you can add AND / OR conditions here to match on additional columns or combinations
WHEN MATCHED THEN
UPDATE SET T1.column2 = source.column2
-- etc. for more columns
WHEN NOT MATCHED THEN
INSERT (column1, column2, column3)
VALUES (source.column1, source.column2, source.column3);
First of all, you should decouple the format in which you receive your data from your actual data handling. In your case: get rid of the JSON before you actually interpret the data.
Alas, I haven't got a clue what your data represents, so let's assume your data is a sequence of customer Orders. When you get new data, you want to add all new Orders, and you want to update changed Orders.
So somewhere you have a method with input your json data, and as output a sequence of Orders:
IEnumerable<Order> InterpretJsonData(string jsonData)
{
...
}
You know JSON better than I do; besides, this conversion is a bit beside the point of your question.
You wrote:
So, if the same data is present in the table it should ignore that record, else if there are any changes it should update that record or insert a new record
You need an Equality Comparer
To detect whether there are added or changed customer Orders, you need something to detect whether Order A equals Order B. There must be at least one unique field by which you can identify an Order, even if all other values of the Order are changed.
This unique value is usually called the primary key, or the Id. I assume your Orders have an Id.
So if your new Order data contains an Id that was not available before, then you are certain that the Order was Added.
If your new Order data has an Id that was already in previously processed Orders, then you have to check the other values to detect whether it was changed.
For this you need equality comparers: one that says two Orders are equal if they have the same Id, and one that checks all values for equality.
A standard pattern is to derive your comparer from class EqualityComparer<Order>
class OrderComparer : EqualityComparer<Order>
{
public static IEqualityComparer<Order> ByValue = new OrderComparer();
... // TODO implement
}
First I'll show you how to use this to detect additions and changes, then I'll show you how to implement it.
Somewhere you have access to the already processed Orders:
IEnumerable<Order> GetProcessedOrders() {...}
var jsondata = FetchNewJsonOrderData();
// convert the jsonData into a sequence of Orders
IEnumerable<Order> orders = this.InterpretJsonData(jsondata);
To detect which Orders are added or changed, you could make a Dictionary of the already processed Orders and check the new orders one by one to see whether they have changed:
IEqualityComparer<Order> comparer = OrderComparer.ByValue;
Dictionary<int, Order> processedOrders = this.GetProcessedOrders()
.ToDictionary(order => order.Id);
foreach (Order order in orders)
{
    if (processedOrders.TryGetValue(order.Id, out Order originalOrder))
    {
        // order already existed. Is it changed?
        if (!comparer.Equals(order, originalOrder))
        {
            // unequal!
            this.ProcessChangedOrder(order);
            // remember the changed values of this Order
            processedOrders[order.Id] = order;
        }
        // else: no changes, nothing to do
    }
    else
    {
        // Added!
        this.ProcessAddedOrder(order);
        processedOrders.Add(order.Id, order);
    }
}
Immediately after Processing the changed / added order, I remember the new value, because the same Order might be changed again.
If you want this in a LINQ fashion, you have to GroupJoin the Orders with the ProcessedOrders, to get "Orders with their zero or more Previously processed Orders" (there will probably be zero or one Previously processed order).
var ordersWithTPreviouslyProcessedOrder = orders.GroupJoin(this.GetProcessedOrders(),
order => order.Id, // from every Order take the Id
processedOrder => processedOrder.Id, // from every previously processed Order take the Id
// parameter resultSelector: from every Order, with its zero or more previously
// processed Orders make one new:
(order, previouslyProcessedOrders) => new
{
Order = order,
ProcessedOrder = previouslyProcessedOrders.FirstOrDefault(),
})
.ToList();
I use GroupJoin instead of Join, because this way I also get the "Orders that have no previously processed orders" (= new orders). If you would use a simple Join, you would not get them.
I do a ToList, so that in the next statements the group join is not done twice:
var addedOrders = ordersWithTPreviouslyProcessedOrder
    .Where(orderCombi => orderCombi.ProcessedOrder == null);
var changedOrders = ordersWithTPreviouslyProcessedOrder
    .Where(orderCombi => orderCombi.ProcessedOrder != null
        && !comparer.Equals(orderCombi.Order, orderCombi.ProcessedOrder));
Implementation of "Compare by Value"
// equal if all values are equal
protected override bool Equals(Order x, Order y)
{
    if (x == null) return y == null; // true if both null, false if x null but y not null
    if (y == null) return false;     // because x is not null
    if (Object.ReferenceEquals(x, y)) return true;
    if (x.GetType() != y.GetType()) return false;
    // compare all properties one by one:
    return x.Id == y.Id
        && x.Date == y.Date
        && ...
}
For GetHashCode there is one rule: if X equals Y, then they must have the same hash code. If they are not equal there is no rule, but lookups are more efficient if unequal objects tend to have different hash codes. Make a trade-off between calculation speed and hash code uniqueness.
In this case: If two Orders are equal, then I am certain that they have the same Id. For speed I don't check the other properties.
protected override int GetHashCode(Order x)
{
    if (x == null)
        return 0x34339d98; // just a hash code for all null Orders
    else
        return x.Id.GetHashCode();
}
Disclaimer: This is Glua (Lua used by Garry's Mod)
I just need to compare the tables and return the difference, as if I were subtracting them.
TableOne = {thing = "bob", [89] = 1, [654654] = {"hi"}} --Around 3k items like that
TableTwo = {thing = "bob", [654654] = "hi"} --Same, around 3k
function table.GetDifference(t1, t2)
local diff = {}
for k, dat in pairs(t1) do --Loop through the biggest table
if(!table.HasValue(t2, t1[k])) then --Checking if t2 hasn't the value
table.insert(diff, t1[k]) --Insert the value in the difference table
print(t1[k])
end
end
return diff
end
if table.Count(t1) != table.Count(t2) then --Check if amount is equal, in my use I don't need to check if they are exact.
PrintTable(table.GetDifference(t1, t2)) --Print the difference.
end
My problem is that with only one difference between the two tables, this returns more than 200 items. The only item I added was a string. I tried many other functions like this one, but they usually cause a stack overflow error because of the tables' length.
Your problem is with this line
if(!table.HasValue(t2, t1[k])) then --Checking if t2 hasn't the value
Change it to this:
if(t2[k] == nil or t1[k] != t2[k]) then --Checking that the key exists in t2 and that t2[k] matches
Right now what is happening is that you're looking at an entry like thing = "bob" and checking whether "bob" appears anywhere in t2 as a value, instead of checking whether t2 stores the same value under the key thing. For entries whose value is a nested table, that lookup compares by reference, so it almost never matches and the entry gets reported as a difference even when the contents are the same.
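If it helps, here is a minimal sketch of a key-based difference written as a plain local function (my own example, not your existing table.GetDifference). It does a shallow comparison, so nested tables are compared by reference; it uses standard Lua's ~= where GLua also accepts !=:
local function GetDifference(t1, t2)
    local diff = {}
    for k, v in pairs(t1) do
        -- key missing from t2, or t2 stores a different value under k
        if t2[k] ~= v then
            diff[k] = v
        end
    end
    return diff
end

for k, v in pairs(GetDifference(TableOne, TableTwo)) do
    print(k, v)
end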
I wonder if we can order just one column in SAS and keep the same order for the other variables.
Usually we use
proc sort
with a BY statement, but this changes the order of all the variables according to the variable used in BY.
Thank you for your help.
Create the sorted column as a new dataset and then merge it back onto the data.
proc sort data=have (keep=COLUMN) out=COLUMN ;
by COLUMN;
run;
data want ;
merge have COLUMN;
* no BY statement ;
run;
You can do this in a data step by using hash methods - e.g. to reverse the order of the name column in sashelp.class while keeping the other columns in the same order:
data class;
/*Set up an ordered hash object + iterator to hold the columns we want to sort*/
if 0 then set sashelp.class(keep = name);
declare hash h(ordered:'d');
rc = h.definekey('name','_n_');
rc = h.definedata('name');
rc = h.definedone();
declare hiter hi('h');
/*Populate the hash object, using _n_ as an extra key to prevent deduplication*/
do _n_ = 1 by 1 until(eof1);
set sashelp.class(keep = name) end = eof1;
rc = h.add();
end;
/*Read in the columns in the desired order using the hash iterator*/
do until(eof2);
set sashelp.class end = eof2;
rc = hi.next();
output;
drop rc;
end;
run;
This assumes that you have sufficient memory to hold the columns being sorted.
You can't do it with PROC SORT. You will need to split the data into sorted and non-sorted datasets, then merge them back together one-to-one without a BY statement using a data step. http://support.sas.com/documentation/cdl/en/basess/58133/HTML/default/viewer.htm#a001318478.htm
Regards,
Vasilij
I'm trying to make a function to sort a table by a value inside it. Is there no function for this already in Lua? I can't seem to find one.
local table2 = {};
for i, v in pairs(table) do
if( table[i].field > table[i+1].field ) then
this is how far I got before I thought that it wouldn't work.
Can someone help me?
The question is not quite clear, but if you mean to sort values in a table that may hold some complex value, you can do this by using a "custom" comparison function:
local t = {
{field = 2},
{field = 1},
}
table.sort(t, function(t1, t2)
return t1.field < t2.field
end)
print(t[1].field, t[2].field) -- prints 1, 2
See sorting table by value for related details.
I have a table that is filled with random content that a user enters. I want my users to be able to rapidly search through this table, and one way of facilitating their search is by sorting the table alphabetically. Originally, the table looked something like this:
myTable = {
Zebra = "black and white",
Apple = "I love them!",
Coin = "25cents"
}
I was able to implement a pairsByKeys() function which allowed me to output the table's contents in alphabetical order, but not to store them that way. Because of the way the searching is set up, the table itself needs to be in alphabetical order.
function pairsByKeys (t, f)
local a = {}
for n in pairs(t) do
table.insert(a, n)
end
table.sort(a, f)
local i = 0 -- iterator variable
local iter = function () -- iterator function
i = i + 1
if a[i] == nil then
return nil
else
return a[i], t[a[i]]
end
end
return iter
end
After a time I came to understand (perhaps incorrectly - you tell me) that non-numerically indexed tables cannot be sorted alphabetically. So then I started thinking of ways around that - one way I thought of is sorting the table and then putting each value into a numerically indexed array, something like below:
myTable = {
[1] = { Apple = "I love them!" },
[2] = { Coin = "25cents" },
[3] = { Zebra = "black and white" },
}
In principle, I feel this should work, but for some reason I am having difficulty with it. My table does not appear to be sorting. Here is the function I use, with the above function, to sort the table:
SortFunc = function ()
local newtbl = {}
local t = {}
for title,value in pairsByKeys(myTable) do
newtbl[title] = value
tinsert(t,newtbl[title])
end
myTable = t
end
myTable still does not end up being sorted. Why?
Lua's tables can be hybrid. For numerical keys starting at 1 it uses a vector (the array part), and for other keys it uses a hash.
For example, {[1] = "foo", [2] = "bar", [4] = "hey", my = "name"}
1 and 2 will be placed in the vector; 4 and my will be placed in the hash table. 4 broke the sequence, and that's the reason it is put into the hash table.
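A small sketch of the consequence (my own illustration): ipairs only walks the contiguous integer part, while pairs also visits the hash part.
local t = { [1] = "foo", [2] = "bar", [4] = "hey", my = "name" }
for i, v in ipairs(t) do print(i, v) end -- prints 1 foo and 2 bar, then stops at the gap
for k, v in pairs(t) do print(k, v) end  -- visits all four entries, in no particular order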
For information on how to sort Lua's table take a look here: 19.3 - Sort
Your new table needs consecutive integer keys, and the values themselves need to be tables. So you want something along these lines:
SortFunc = function (myTable)
local t = {}
for title,value in pairsByKeys(myTable) do
table.insert(t, { title = title, value = value })
end
myTable = t
return myTable
end
This assumes that pairsByKeys does what I think it does...
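For example, usage might look like this (a sketch, assuming the pairsByKeys function you already have):
myTable = SortFunc(myTable)
for i, entry in ipairs(myTable) do
    print(i, entry.title, entry.value)
end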