I need some help with the infrastructure for storing business hours for a location on Parse.com. I already tried a separate class called BusinessHours, where each row has a pointer to the Location class. With a minimum of 7 rows (one per day of the week) for each location, the object count comes to over 10,000.
Then in Swift I do this to determine whether the location is open now:
for hour in hours {
    if hour.isClosedAllDay {
        isOpen = "closed".localized
    } else {
        let now = NSDate()
        // Check whether "now" falls on this entry's weekday (the offset helpers
        // appear to handle closing times that roll past midnight).
        if now.hasDayOffset(hour.weekday, closeWeekDay: hour.nextWeekday) {
            if hour.open != nil && hour.close != nil {
                let open = now.hourDateFromString(hour.open!, offset: now.dayOpenOffset(hour.weekday, closeWeekDay: hour.nextWeekday))
                let close = now.hourDateFromString(hour.close!, offset: now.dayCloseOffset(hour.weekday, closeWeekDay: hour.nextWeekday))
                if now.isBetween(open, close: close) {
                    isOpen = "open".localized
                    timeOfBusiness = hour.time!
                    break
                }
            }
        }
    }
}
Is there a better way to do this than having thousands of rows just for business hours? I was thinking of adding an object field to the Location class for the hours, but I don't know if that is the right way to go either.
Depending on how you want to edit and change the details, and the complexities of multiple opening times per day, I'd consider not using multiple columns and rows. Instead, you could simply store a JSON string in a single column which contains all of the required details.
Obviously you wouldn't be able to use this for querying, so if you need to do that then you need to keep something more like your current solution.
If you don't need querying, or you only need simple querying like 'is it open at all on a Monday', then a combined solution could work well, supported by Cloud Code so the app doesn't need detailed knowledge of the JSON. For instance, you could have columns for the general open hours of each day plus the details in JSON; that way you can get a rough answer by querying and then check the exact details before presenting or using the result.
I ended up doing it like this in an array field called businessHours in my Location class:
[
{"close":"20:00Z","open":"12:00Z","time":"09:00 - 17:00","isClosedAllDay":false,"nextWeekday":1,"weekday":1},
{"close":"20:00Z","open":"12:00Z","time":"09:00 - 17:00","isClosedAllDay":false,"nextWeekday":2,"weekday":2},
{"close":"20:00Z","open":"12:00Z","time":"09:00 - 17:00","isClosedAllDay":false,"nextWeekday":3,"weekday":3},
{"close":"20:00Z","open":"12:00Z","time":"09:00 - 17:00","isClosedAllDay":false,"nextWeekday":4,"weekday":4},
{"close":"20:00Z","open":"12:00Z","time":"09:00 - 17:00","isClosedAllDay":false,"nextWeekday":5,"weekday":5},
{"close":"20:00Z","open":"12:00Z","time":"09:00 - 17:00","isClosedAllDay":false,"nextWeekday":6,"weekday":6},
{"close":"20:00Z","open":"12:00Z","time":"09:00 - 17:00","isClosedAllDay":false,"nextWeekday":7,"weekday":7}
]
and then looping through the objects as NSDictionary values.
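For reference, that loop looks roughly like this; a minimal sketch, where location stands for the fetched Location object and the cast is an assumption:

if let hours = location["businessHours"] as? [[String: Any]] {
    for hour in hours {
        // Each entry mirrors the JSON structure above.
        let weekday = hour["weekday"] as? Int
        let isClosedAllDay = hour["isClosedAllDay"] as? Bool ?? false
        let time = hour["time"] as? String
        // ...same open/closed checks as in the loop from the question...
    }
}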
thanks Wain!
I'm using EF Core but I'm not really an expert with it, especially when it comes to details like querying tables in a performant manner...
What I'm trying to do is simply get the max value of one column from a filtered table.
What I have so far is this:
protected override void ReadExistingDBEntry()
{
    using Model.ResultContext db = new();

    // Filter the table data down to the rows relevant to us.
    // The whole table may contain 0 rows or millions of them.
    IQueryable<Measurement> dbMeasuringsExisting = db.Measurements
        .Where(meas => meas.MeasuringInstanceGuid == Globals.MeasProgInstance.Guid
            && meas.MachineId == DBMatchingItem.Id);

    if (dbMeasuringsExisting.Any())
    {
        // The max value we're interested in. dbMeasuringsExisting could still cover millions of rows.
        iMaxMessID = dbMeasuringsExisting.Max(meas => meas.MessID);
    }
}
The SQL equivalent of what I want would be something like this:
select max(MessID)
from Measurement
where MeasuringInstanceGuid = Globals.MeasProgInstance.Guid
and MachineId = DBMatchingItem.Id;
While the above code works (it returns the correct value), I think it has a performance issue as the database table gets larger: is the Max computed client-side after all the rows are transferred, or am I wrong here?
How can I do it better? I want the database server to filter my data. Of course I don't want any raw SQL script ;-)
This can be addressed by typing the return as nullable, so that you do not get an exception on an empty set, and then applying a default value for the int. Alternatively, you can just assign the result to a nullable int. Note the assumption here that the ID has an integer type; the same principle would apply to a Guid as well.
int MaxMessID = dbMeasuringsExisting.Max(p => (int?)p.MessID) ?? 0;
There is no need for the Any() call, as it causes an additional round trip to the database, which is not desirable in this case.
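For what it's worth, EF Core translates both the Where and the Max here into a single SQL statement, so the filtering and aggregation happen on the database server, not the client. Putting the two points together, the method from the question might look like this (a sketch based on the original code):

protected override void ReadExistingDBEntry()
{
    using Model.ResultContext db = new();

    // One round trip: the WHERE and MAX() both run on the database server.
    // The (int?) cast makes an empty result yield null instead of throwing,
    // and ?? 0 supplies the default.
    iMaxMessID = db.Measurements
        .Where(meas => meas.MeasuringInstanceGuid == Globals.MeasProgInstance.Guid
            && meas.MachineId == DBMatchingItem.Id)
        .Max(meas => (int?)meas.MessID) ?? 0;
}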
If I have a couple docs in Couch that look like this:
{
    "_id": "be890e3ee1457e920f12722c44001b0e", // or whatever auto ID
    "_rev": "7-74d1787aa3ca6d2526c4436577da660f", // or whatever auto rev
    "type_": "count",
    "value": -1,
    "time": 1485759832925 // an epoch time, the result of this JavaScript: var x = (new Date()).getTime(), that I calculate in the console just before saving the doc
}
And then I create a map function to retrieve these docs like so (which I run directly after creating a few docs):
function (doc) {
    if (doc.type_) {
        if (doc.time) {
            var datetime = (new Date()).getTime();
            var docTime = doc.time;
            var docAge = datetime - docTime;
            // Only emit docs younger than 1 minute
            if (docAge / 1000 <= 60) {
                emit(doc.time, docAge);
            }
        }
    }
}
I found that once the view is calculated, the docAge never changes, and the docs are always emitted despite being 'too old'.
If you open a doc and re-save it, the view will NOT emit that doc (since the re-save counts as a CouchDB update and by now the time value is too old), but the other docs will not have been recalculated (i.e. the docAge for those docs stays the same).
So by this I can see that views are incrementally updated to reflect changed docs. And as I understand, they are cached.
Questions:
- Where are these cached views stored?
- Is group and reduce output recalculated from scratch every time the map function incrementally updates?
Your views are not being "cached" per se. The idea behind CouchDB views is that they are deterministic, and thus should not be influenced by anything beyond the document in question.
Using new Date() in your view brings in an external resource (the clock), which means your view index will be computed in a way you aren't intending, based on your question.
Your map function must deal in absolutes, so it should output the timestamp regardless of when your view index is rebuilt. From your application, you'll pass the time you want to query as a parameter to the view query.
For example, consider this view function:
function (doc) {
if (doc.type_ && doc.time) {
emit(doc.time);
}
}
It will output the time for all your documents. Then, you will query the view passing in the expected timeframe.
?start_key=<timestamp from 1 minute ago>
Then you will get the documents whose timestamps fall within the last minute. You can include end_key to specify an upper limit.
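For example, the client can compute the cutoff and build the query itself; a rough sketch, where the database, design document, and view names are placeholders:

// Compute "1 minute ago" in the application, not in the view.
var oneMinuteAgo = Date.now() - 60 * 1000;
// Only rows whose emitted key (doc.time) is >= oneMinuteAgo come back.
var url = '/mydb/_design/app/_view/by_time?startkey=' + oneMinuteAgo;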
There's a bit of a mental hurdle to overcome with how MapReduce views in CouchDB are designed to work, so I would highly recommend their Guide to Views to get started. (In fact, their newest documentation is quite good, and I would highly recommend reading through all of it.)
I'm working on my first laravel project: a family tree. I have 4 branches of the family, each with people/families/images/stories/etc. A given user on the website will have access to everything for 1, 2, or 4 of these branches of the family (I don't want to show a cousin stuff for people they're not related to).
So on various pages I want the collections from the controller to contain stuff based on the given user's permissions. Merge seems like the right way to do this.
I have scopes to get people from each branch of the family, and in the following example I also have a scope for people with a birthday this month. In order to show the right set of birthdays for this user, I can merge each group individually, if the user has access to it.
Here's what my function would look like if I showed everyone in all 4 family branches:
public function get_birthday_people()
{
    $user = \Auth::user();

    $jones_birthdays = Person::birthdays()->jones()->get();
    $smith_birthdays = Person::birthdays()->smith()->get();
    $lee_birthdays = Person::birthdays()->lee()->get();
    $brandt_birthdays = Person::birthdays()->brandt()->get();

    $birthday_people = $jones_birthdays
        ->merge($smith_birthdays)
        ->merge($lee_birthdays)
        ->merge($brandt_birthdays);

    return $birthday_people;
}
My challenge: I'd like to modify it so that I check the user's access and only add each group of people accordingly. I'm imagining something where it's all the same as above except I add conditionals like this:
if ($user->jones_access) {
    $jones_birthdays = Person::birthdays()->jones()->get();
} else {
    $jones_birthdays = null;
}
But that throws an error for users without access because I can't call merge on NULL (or an empty array, or the other versions of 'nothing' that I tried).
What's a good way to do something like this?
if ($user->jones_access) {
    $jones_birthdays = Person::birthdays()->jones()->get();
} else {
    $jones_birthdays = new Collection;
}
Better yet, do the merge in the condition, no else required.
$birthday_people = new Collection;

if ($user->jones_access) {
    // Note: merge() returns a new collection rather than mutating this one,
    // so the result has to be reassigned.
    $birthday_people = $birthday_people->merge(Person::birthdays()->jones()->get());
}
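Applied to all four branches, the whole method might look like this; a sketch that reuses the scopes from the question and assumes matching smith_access, lee_access, and brandt_access flags on the user:

use Illuminate\Support\Collection; // at the top of the controller

public function get_birthday_people()
{
    $user = \Auth::user();
    $birthday_people = new Collection;

    // merge() returns a new collection each time, hence the reassignment.
    if ($user->jones_access) {
        $birthday_people = $birthday_people->merge(Person::birthdays()->jones()->get());
    }
    if ($user->smith_access) {
        $birthday_people = $birthday_people->merge(Person::birthdays()->smith()->get());
    }
    if ($user->lee_access) {
        $birthday_people = $birthday_people->merge(Person::birthdays()->lee()->get());
    }
    if ($user->brandt_access) {
        $birthday_people = $birthday_people->merge(Person::birthdays()->brandt()->get());
    }

    return $birthday_people;
}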
You are going to want your Eloquent query to only return the relevant data for the user requesting it. It doesn't make sense to query Lee birthdays when a Jones person is accessing that page.
So what you will wind up doing is something like
$birthdays = App\Person::where('family', $user->family)->get();
This pulls in Persons where their family property is equal to the family of the current user.
This probably does not match the way you have your relationships right now, but hopefully it will get you on the right track to getting them sorted out.
If you really want to go ahead with a bunch of queries and authorization checks, read up on the authorization features of Laravel. It will let you assign abilities to users and check them easily.
Why isn't the exception thrown? Is LINQ's Any() not considering the new entries?
MyContext db = new MyContext();

foreach (string email in new[] { "asdf@gmail.com", "asdf@gmail.com" })
{
    Person person = new Person();
    person.Email = email;

    if (db.Persons.Any(p => p.Email.Equals(email)))
    {
        throw new Exception("Email already used!");
    }

    db.Persons.Add(person);
}

db.SaveChanges();
Shouldn't the exception be triggered on the second iteration?
The previous code is adapted for the question, but the real scenario is the following:
I receive an Excel file of persons and iterate over it, adding every row as a person to db.Persons while checking that their emails aren't already used in the db. The problem arises when there are repeated emails in the worksheet itself (two rows with the same email).
Yes - queries (by design) are only computed against the data source. If you want to query in-memory items you can also query the Local store:
if (db.Persons.Any(p => p.Email.Equals(email)) ||
    db.Persons.Local.Any(p => p.Email.Equals(email)))
However - since YOU are in control of what's added to the store wouldn't it make sense to check for duplicates in your code instead of in EF? Or is this just a contrived example?
Also, throwing an exception for an already existing item seems like a poor design as well - exceptions can be expensive, and if the client does not know to catch them (and in this case compare the message of the exception) they can cause the entire program to terminate unexpectedly.
A call to db.Persons will always trigger a database query, but those new Persons are not yet persisted to the database.
I imagine if you look at the data in debug, you'll see that the new person isn't there on the second iteration. If you were to set MyContext db = new MyContext() again, it would be, but you wouldn't do that in a real situation.
What is the actual use case you need to solve? This example doesn't seem like it would happen in a real situation.
If you're comparing against the db, your code should work. If you need to prevent duplicates from being entered, it should happen elsewhere: on the client, or by checking the C# collection before you start writing it to the db.
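If the worksheet itself can contain duplicates, a HashSet makes the in-memory check explicit. A sketch, assuming the same Person and MyContext types (emailsFromWorksheet is a placeholder for the parsed rows):

using MyContext db = new MyContext();
var seenEmails = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

foreach (string email in emailsFromWorksheet)
{
    // Add() returns false if this email already appeared in the worksheet,
    // which db.Persons.Any(...) alone cannot detect before SaveChanges().
    if (!seenEmails.Add(email) || db.Persons.Any(p => p.Email == email))
    {
        throw new Exception("Email already used: " + email);
    }
    db.Persons.Add(new Person { Email = email });
}

db.SaveChanges();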
I have a simple document named Order with the fields id, name, userId and timeScheduled.
What I would like to do is create a view where I can find the document.id for those whose userId is some value and whose timeScheduled is after a given date.
My view:
"by_users_after_time": {
"map": "function(doc) { if (doc.userId && doc.timeScheduled) {
emit([doc.timeScheduled, doc.userId], doc._id); }}"
}
If I do
localhost:5984/orders/_design/Order/_view/by_users_after_time?startKey="[2012-01-01T11:40:52.280Z,f98ba9a518650a6c15c566fc6f00c157]"
I get every result back. Is there a way to access key[1] to do an if doc.userId == key[1], or something along those lines, and simply emit on the time?
This would be the SQL equivalent of:
select id from Order
where userId = "f98ba9a518650a6c15c566fc6f00c157"
and timeScheduled > 2012-01-01T11:40:52.280Z;
I did quite a few Google searches, but I can't seem to find a good tutorial on working with multiple keys. It's also possible that my approach is entirely flawed, so any guidance would be appreciated.
You only need to reverse the key, because the userId is known:
function (doc) {
    if (doc.userId && doc.timeScheduled) {
        emit([doc.userId, doc.timeScheduled], 1);
    }
}
Then query with:
?startkey=["f98ba9a518650a6c15c566fc6f00c157","2012-01-01T11:40:52.280Z"]
Notes:
- The query parameter is startkey, not startKey.
- The value of startkey is an array, not a string, so the double quotes go around the userId and date values, not around the array.
- I emit 1 as the value, instead of doc._id, to save disk space: every row of the result already has an id field with the doc._id, so there's no need to repeat it.
- Don't forget to set endkey=["f98ba9a518650a6c15c566fc6f00c157",{}], otherwise you get the data of all users > "f98ba9a518650a6c15c566fc6f00c157". The full query is shown below.
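Putting the notes together, the complete query string would look something like this:

?startkey=["f98ba9a518650a6c15c566fc6f00c157","2012-01-01T11:40:52.280Z"]&endkey=["f98ba9a518650a6c15c566fc6f00c157",{}]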
The answer actually came from the CouchDB mailing list:
Essentially, Date.parse() doesn't like the +0000 on the timestamps. By doing a substring and removing the +0000, everything worked.
For the record:
document.write(new Date("2012-02-13T16:18:19.565+0000")); // Outputs Invalid Date
document.write(Date.parse("2012-02-13T16:18:19.565+0000")); // Outputs NaN
But if you remove the +0000, both lines of code work perfectly.
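In code, that workaround amounts to something like this sketch (it assumes the timestamp format shown above, where the zone suffix is always the last five characters):

var raw = "2012-02-13T16:18:19.565+0000";
// Strip the "+0000" zone suffix that this Date.parse() implementation rejects.
var cleaned = raw.substring(0, raw.length - 5);
document.write(new Date(cleaned));   // a valid Date
document.write(Date.parse(cleaned)); // a numeric timestamp, not NaN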