So I have a form and a table called Variables. The table has just the fields VarID, VarDescription, and VarValue, and it holds only three records, which are all network locations of things. VarValue is the only field that can be changed through the form, and thus it is the only thing validated. I validate those records on the form with the control's Before Update event, using logic such as:
If Me.VarID = 1 Then
    If Me.Tex Like "*:\*" Then
        ' ... value looks like a full path (drive letter plus backslash)
    End If
    If GetAttr(Me.Tex) = vbDirectory Then
        ' ... path points to an existing folder
    End If
End If
If Me.VarID = 2 Then
    If Me.Tex Like "*:\*" Then
        ' ... value looks like a full path (drive letter plus backslash)
    End If
    If GetAttr(Me.Tex) = vbNormal Or GetAttr(Me.Tex) = vbArchive Then
        ' ... path points to an existing file
    End If
End If
This all works great. However, my issue comes in when multiple locations become invalid at once. I get stuck in one cell because the other VarValues are invalid as well. How can I validate only the cell I have changed? I tried playing around with various Dirty and Focus events/methods, but those seem to be form specific, not cell specific.
This seems to just be a reference issue, as I am looking at the database on two completely separate networks. The original uses all Office 16.0 libraries, whereas the instance I was having the issue with was using Office 15.0 libraries. I played around with the original and it works fine, even if all the locations become bad at the same time.
Hey guys, I'm having the WEIRDEST issue while pairing on this Rails app. We're using the Google Maps API to grab lat and lng to do an ActiveRecord lookup. For some odd reason SomeModel.where("lat <= ?", neLat) will not work, BUT when I print neLat to the console, copy the number, and hard-code it into that query, it works perfectly! I'm so stumped. Has anyone had any issues like this before?
This is an example of what I'm currently doing which doesn't work:
data = request.parameters
neLat = data['neLat'].to_f
@props = SomeModel.where("lat <= ?", neLat)
47.6090933689332 is the number I see in my console/ActiveRecord query. One troubleshooting step I've taken was hard-coding the number into a variable like this:
neLat = 47.6090933689332
@props = SomeModel.where("lat <= ?", neLat)
I get the results I'm looking for this way, but I don't want to be hardcoding these coordinates. Another troubleshooting step I've taken was to make sure there wasn't a conflict with the float value, so I went with this:
temp = "47.6090933689332"
neLat = temp.to_f
@props = SomeModel.where("lat <= ?", neLat)
This didn't work either as it gave me the same results as the example just above.
EDIT:
I've provided more details as requested in the comments. Also, I realized I wasn't as clear as I could have been in my initial question. When I query, I'm able to get my desired results back in the console; the problem lies in the rendering of the results in my view. Any form of hardcoding will render correctly in my view, but the rendering fails when I use the values from request.parameters. It will either render everything in the table or nothing in the table. So in the example below I can see that I'm getting 3 different properties from my properties table, but my view will render EVERYTHING in my table, which is about 7 properties.
Here is the raw SQL query that ActiveRecord generates (this is the result of all 3 approaches above):
SELECT `properties`.* FROM `properties` WHERE (lat >= 47.609093368933195)
#<Property:0x007fa333ac1880>
#<Property:0x007fa333ac1718>
#<Property:0x007fa333ac15b0>
As for the schema, it's just one table that I'm using and no joins.
http://kb.mailchimp.com/api/resources/lists/members/lists-members-collection
Using this resource we can obtain only the first 10 members. How do we get all of them?
The answer is quite simple - use the offset and count parameters in the URL query:
https://us10.api.mailchimp.com/3.0/lists/b5b5fdc2fa/members?offset=150&count=10
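For illustration, here is a minimal paging loop in Python (a sketch only, assuming the requests library, a placeholder API key, and HTTP Basic auth with the key as the password; the list ID is the one from the URL above):
import requests

API_KEY = "xxxxxxxxxxxxxxxx-us10"   # placeholder API key
BASE = "https://us10.api.mailchimp.com/3.0"
LIST_ID = "b5b5fdc2fa"

members = []
offset, count = 0, 100
while True:
    resp = requests.get(f"{BASE}/lists/{LIST_ID}/members",
                        auth=("anystring", API_KEY),
                        params={"offset": offset, "count": count})
    resp.raise_for_status()
    page = resp.json()["members"]
    members.extend(page)
    if len(page) < count:
        break                       # a short page means we have reached the end
    offset += count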
Finally, I found a PHP API client for MailChimp API v3:
https://github.com/pacely/mailchimp-api-v3
And the official docs about pagination, which I missed before :(
http://kb.mailchimp.com/api/article/api-3-overview
I stumbled on this one while researching a way to get all list members in MC API 3.0 as well. I noticed that there were some comments on the API timing out when trying to get all list members on one page. I also encountered this at first, but was able to overcome it by limiting the fields in the result using the 'fields' param. My code is for a mass deleter, so all I really needed was the ID of each member to put together a batch delete request. Here's how my fetch request looks (pseudo-code):
$total_members = $result['total_items']; // number of members in the list, from a previous request
https://usXX.api.mailchimp.com/3.0/lists/foobarx/members?fields=members.id&count=$total_members
This way I'm able to fetch over 15,000 subscribers on one page without error.
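A rough Python equivalent of that request (placeholder data centre, API key, and list ID; assumes the total was obtained from total_items of a previous request, as above):
import requests

API_KEY = "xxxxxxxxxxxxxxxx-usXX"   # placeholder API key
BASE = "https://usXX.api.mailchimp.com/3.0"
LIST_ID = "foobarx"
total_members = 15000               # taken from total_items of a previous request

resp = requests.get(f"{BASE}/lists/{LIST_ID}/members",
                    auth=("anystring", API_KEY),
                    params={"fields": "members.id", "count": total_members})
member_ids = [m["id"] for m in resp.json()["members"]]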
offset and count are the official way per the docs, but the problem is that offset-based paging slows down as the offset grows. Scanning a whole list that way behaves like an n^2 solution, so if you have 20,000 items, you're in trouble. Their docs http://developer.mailchimp.com/documentation/mailchimp/reference/lists/members/#read-get_lists_list_id_members warn you against using offset.
If your scenario permits you to use other filters (like since_last_changed), then you can do it quickly. See "What is the right syntax for 'timeframe' in MailChimp API 3.0" for the datetime format.
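For example, a request of this shape (hypothetical list ID; since_last_changed takes an ISO 8601 timestamp, which needs to be URL-encoded) returns only the members changed since that date, so each window you page through stays small:
https://usXX.api.mailchimp.com/3.0/lists/YOUR_LIST_ID/members?since_last_changed=2017-01-01T00:00:00%2B00:00&count=1000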
Using the offset and count parameters is correct, as mentioned in some of the other answers, but it becomes tedious for large lists.
A more efficient way is to use a client for the MailChimp API. I used mailchimp3 for Python. Using this, it's pretty easy to get all the members on your list because it handles the pagination. Here's how you would do it:
from mailchimp3 import MailChimp
client = MailChimp('YOUR_USERNAME', 'YOUR_SECRET_KEY')
client.lists.members.all('YOUR_LIST_ID', get_all=True, fields="members.email_address")
You can also do it just with count: make an API call to the list root to get the total number of members, then in the next API call pass that number as the count parameter, and you have all your list members in one response.
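A rough sketch of that two-step approach (Python, placeholder API key and list ID; assumes the list is small enough for a single response):
import requests

API_KEY = "xxxxxxxxxxxxxxxx-usXX"   # placeholder API key
BASE = "https://usXX.api.mailchimp.com/3.0"
LIST_ID = "YOUR_LIST_ID"

# 1) the list root reports how many members the list has
total = requests.get(f"{BASE}/lists/{LIST_ID}",
                     auth=("anystring", API_KEY)).json()["stats"]["member_count"]

# 2) pass that total as count to fetch every member in one call
members = requests.get(f"{BASE}/lists/{LIST_ID}/members",
                       auth=("anystring", API_KEY),
                       params={"count": total}).json()["members"]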
I ran into issues with this because I had a moderate list with 2600 members and MailChimp was throwing an error, but it worked with 1500 people.
So for lists bigger than 1500 members I use the MailChimp Export API. Bear in mind that this is going to be discontinued, but I could not find any other acceptable solution.
Alternatively, for bigger lists (>1500) you could get the total number of members and then make multiple API calls to the members endpoint, but I really dislike that :(
If anyone has a better alternative I would be really glad to hear it.
With MailChimp.Net, use the offset value:
List<Member> listMembers = new List<Member>();
IMailChimpManager manager = new MailChimpManager(MailChimpApiKey);
bool moreAvailable = true;
int offset = 0;
while (moreAvailable)
{
    // fetch the next page of up to 250 subscribed members
    var pageRequest = manager.Members.GetAllAsync(yourListId, new MemberRequest
    {
        Status = Status.Subscribed,
        Limit = 250,
        Offset = offset
    }).ConfigureAwait(false);
    var page = pageRequest.GetAwaiter().GetResult();

    foreach (Member member in page)
    {
        listMembers.Add(member);
    }

    // a page with fewer than 250 members means there are no more results
    if (page.Count() == 250)
        offset += 250;
    else
        moreAvailable = false;
}
Is there a way to set the maximum cache time for a USER object? (Not sure if it's really called an object...)
The only thing I found was COA_GO - which is COA with a user-defined cache time - but the update to the latest revision is about two years old, which makes me hope that there is a similar core feature which made it obsolete...
/optimism off
If it's not possible at all, an example of how to leverage TYPO3's internal cache would also solve most of my problems.
I just had a look at class.t3lib_cache_manager.php, and... I don't really get it... I was expecting something similar to APC...
Thanks in advance for any hint or suggestion!
Take a look at the new cObj Cache (Forge) and the blog article explaining how it works.
It basically registers a new stdWrap property which can contain a lifetime:
5 = TEXT
5 {
    cache.key = mycurrenttimestamp
    cache.tags = tag_a,tag_b,tag_c
    cache.lifetime = 3600
    data = date : U
    strftime = %H:%M:%S
}
I am developing a system where I assume there will be many users. Each user has a profile represented inside the application as a record. To store a user's profile I do the following: base64:encode_to_string(term_to_binary(Profile)), so basically profiles are stored in a serialized manner.
So far everything is just fine. Now comes the question:
From time to time I plan to extend the profile functionality by adding and removing certain fields. My question is: what is the best strategy to handle these changes in the code?
The approach I see at the moment is to do something like this:
Profile = get_profile(UserName),
case is_record(Profile, profile1) of
    true ->
        % do stuff with Profile#profile1
        ok;
    _ ->
        next
end,
case is_record(Profile, profile2) of
    true ->
        % do stuff with Profile#profile2
        ok;
    _ ->
        next
end,
I want to know if there are any better solutions for my task.
Additional info: I use a simple KV storage. It cannot store Erlang types; this is why I use State#state.player#player.chips#chips.br
Perhaps you could use proplists.
Assume you have stored some user profile:
User = [{name,"John"},{surname,"Dow"}].
store_profile(User).
Then, after a couple of years, you decide to extend the user profile with the user's age:
User = [{name,"John"},{surname,"Dow"},{age,23}].
store_profile(User).
Now you need to get a user profile from the DB:
get_val(Key, Profile) ->
    V = lists:keyfind(Key, 1, Profile),
    case V of
        {_, Val} -> Val;
        _ -> undefined
    end.
User = get_profile().
UserName = get_val(name,User).
UserAge = get_val(age,User).
If you get a user profile of 'version 2', you will get an actual age (23 in this particular case).
If you get a user profile of 'version 1' (an 'old' one), you will get 'undefined' as the age - and then you can update the profile and store it with the new value, so it becomes a 'new version' entity.
So, no version conflict.
Probably this is not the best way to do it, but it might be a solution in some cases.
It strongly depends on the proportion between the number of records, the frequency of changes, and the acceptable outage. I would prefer upgrading the profiles to the newest version first, for maintainability. You can also make a system which upgrades on the fly, as mnesia does. And finally there is the possibility of keeping code for all versions, which I would definitely not prefer. It is a maintenance nightmare.
Anyway, since is_record/2 is allowed in guards, I would prefer
case Profile of
    X when is_record(X, profile1) ->
        % do stuff with Profile#profile1
        ok;
    X when is_record(X, profile2) ->
        % do stuff with Profile#profile2
        ok
end
Notice there is no catch-all clause, because what would you do with an unknown record type? It is an error, so fail fast!
You have many other options, e.g. a hack like:
case element(1, Profile) of
    profile1 ->
        % do stuff with Profile#profile1
        ok;
    profile2 ->
        % do stuff with Profile#profile2
        ok
end
or something like
{_, F} = lists:keyfind({element(1, Profile), size(Profile)}, 1,
                       [{{profile1, record_info(size, profile1)}, fun foo:bar/1},
                        {{profile2, record_info(size, profile2)}, fun foo:baz/1}]),
F(Profile).
and many other possibilities.
The best approach is to keep a copy of the serialized profile and also a copy of the same data in record form. Then, each time changes are made to the record-form profile, changes are also made to the serialized profile of the same user ATOMICALLY (within the same transaction!). The code that modifies the user's record profile should always recompute the new serialized form, which, to you, is the external representation of the user's record.
-record(record_prof, {name, age, sex}).
-record(myuser, {
    username,
    record_profile = #record_prof{},
    serialized_profile
}).
change_profile(Username, age, NewValue) ->
    %% transaction starts here....
    [MyUser] = mnesia:read({myuser, Username}),
    Rec = MyUser#myuser.record_profile,
    NewRec = Rec#record_prof{age = NewValue},
    NewSerialised = serialise_profile(NewRec),
    NewUser = MyUser#myuser{
        record_profile = NewRec,
        serialized_profile = NewSerialised
    },
    write_back(NewUser),
    %% transaction ends here.....
    ok.
So whatever the serialize function does, that is up to you. This does mean that a profile change is never free of that overhead, but we thereby keep the serialized profile as the correct representation of the record profile at all times. When changes occur to the record profile, the serialized form must also be recomputed (transactionally) so as to preserve integrity.
You could use some extensible data serialization format such as JSON or Google Protocol Buffers.
Both of these formats support adding new fields without breaking backwards compatibility. By using them you won't need to introduce explicit versioning to your serialized data structures.
Choosing between the two formats depends on your use case. For instance, using Protocol Buffers is more reliable, whereas JSON is easier to get started with.
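As a minimal sketch of why this removes the need for explicit versions (shown in Python purely to illustrate the idea; any JSON library behaves the same way): a field added later is simply absent from old blobs, and the reader supplies a default:
import json

def deserialize_profile(blob):
    profile = json.loads(blob)
    # "age" was added in a later version; old blobs simply do not have it
    profile.setdefault("age", None)
    return profile

old_blob = json.dumps({"name": "John", "surname": "Dow"})             # written by old code
new_blob = json.dumps({"name": "Ann", "surname": "Lee", "age": 23})   # written by new code
print(deserialize_profile(old_blob))   # age falls back to None
print(deserialize_profile(new_blob))   # age is 23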