Appengine Projection Query Costs Not Consistent With Docs

I have a test handler with the following:
Model.query().get(projection=[Model.name._name])
Appstats shows me the following:
(1) 2013-09-29 21:46:38.638 "GET /test" 200 real=2585ms api=0ms overhead=2ms (1 RPC, cost=140, billed_ops=[DATASTORE_READ:2])
According to https://developers.google.com/appengine/docs/billing?csw=1#cost_resource, it should be using 1 read + 1 small for the projected query. Why is it telling me 2 reads (keys_only does use only 1 small)? Also, why is each read a cost of 70 when the docs say 60?
This occurs on both development and production servers.
EDIT: the Model class used is from ndb

What's Model.name._name? Try using Model.name instead.
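For reference, a projection query in ndb is normally written with the property object itself (a minimal sketch; the model definition here is an assumption):

from google.appengine.ext import ndb

class Model(ndb.Model):
    name = ndb.StringProperty()

# Pass the property object, not its internal _name attribute.
entity = Model.query().get(projection=[Model.name])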


JMeter - Break down huge JDBC query into multiple JDBC queries

I want to perform a JDBC query which returns 700K+ rows and then build my logic based on that, but when I try to execute it the response time is too long, so I need to break it down into multiple queries, each returning e.g. 1000 results.
My architecture is like:
001_1 JDBC max_value - defines the maximum value (that 700K, which keeps increasing).
001_1 JDBC MAIN - the JDBC request I want to break down into multiple JDBC requests.
Init counter = vars.put("counter","1");
offset_value - counter element
While controller - ${__javaScript(parseInt(vars.get("counter"))<=700)}
If I put a hard-coded value into the While Controller, everything works fine and my script runs; but when the database grows, I have to manually amend the 700 in the While Controller so it covers the new records.
Based on my understanding, I have 3 variables here:
max_value = 714K
counter = 1
offset_value = 0
If I try ${__javaScript(parseInt(vars.get("offset_value")<=parseInt(vars.get("max_value")))== true)} as the While Controller condition, offset_value is still not evaluated and the While Controller does not work properly.
How can I compare offset_value vs. max_value so I can drive my While Controller?
Any help is appreciated!
If your parseInt(vars.get("offset_value")) expression is executed against a not-yet-initialised variable, it will return NaN, so comparing it with a number doesn't make a lot of sense; you need to amend it to something like parseInt(vars.get("offset_value")) || 0 so that it returns zero on the first iteration.
Also be aware that starting from JMeter 3.1 you should be using JSR223 Test Elements for scripting and, correspondingly, the __groovy() function in the While Controller. More information: Apache Groovy - Why and How You Should Use It
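For example, the condition above could look roughly like this with __groovy() (a sketch; the variable names are taken from the question):
${__groovy((vars.get('offset_value') ?: '0') as int <= (vars.get('max_value') as int))}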
Thanks for the help @Dmitri T; however, the solution for me was:
1. Initialize the compare value: JSR223 Sampler -> vars.put("offset_value","0");
2. ${__javaScript(parseInt(vars.get("offset_value"))<=parseInt(vars.get("max_value_1")),)} -> inside the While Controller.
3. Counter: Track counter & Reset counter must both be checked

How to initialize PYFMI models in parallel?

I am using pyfmi to do simulations with EnergyPlus. I noticed that initializing the individual EnergyPlus models takes quite some time, so I hope to find a way to initialize the models in parallel. I tried the Python library multiprocessing with no success. If it matters, I am on Ubuntu 16.10 and use Python 3.6.
Here is what I want to get done in serial:
fmus = {}
for id in id_list:
    chdir(fmu_path + str(id))
    fmus[id] = load_fmu('f_' + str(id) + '.fmu', fmu_path + str(id))
    fmus[id].initialize(start_time, final_time)
The result is a dictionary with ids as keys and the models as values: {id1: FMUModelCS1, id2: FMUModelCS1}
The purpose is to later call the models by their keys and run simulations.
Here is my attempt with multiprocessing:
def ep_intialization(id, start_time, final_time):
    chdir(fmu_path + str(id))
    model = load_fmu('f_' + str(id) + '.fmu', fmu_path + str(id))
    model.initialize(start_time, final_time)
    return {id: model}

data = ((id, start_time, final_time) for id in id_list)

if __name__ == '__main__':
    pool = Pool(processes=cpus)
    pool.starmap(ep_intialization, data)
    pool.close()
    pool.join()
I can see the processes of the models in my system monitor, but then the script raises an error because the models are not picklable:
MaybeEncodingError: Error sending result: '[{id2: <pyfmi.fmi.FMUModelCS1 object at 0x561eaf851188>}]'. Reason: 'TypeError('self._fmu,self.callBackFunctions,self.callbacks,self.context,self.variable_list cannot be converted to a Python object for pickling',)'
But I cannot imagine that there is no way to initialize the models in parallel. Frameworks/libraries other than threading/multiprocessing are also welcome.
I saw this answer, but it seems to focus on the simulations after initialization.
The answer below the one you refer to seems to explain what the problem with multiprocessing and FMU instantiation is.
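Since the FMU object itself cannot be pickled and sent back to the parent process, one workaround is to keep each model inside its worker for its entire lifetime, i.e. do the later simulation there too and return only picklable results. A rough sketch, untested with EnergyPlus (fmu_path, start_time, final_time, cpus and id_list as in the question):

from os import chdir
from multiprocessing import Pool
from pyfmi import load_fmu

def init_and_simulate(id):
    # Load, initialize and simulate entirely inside the worker, so the
    # unpicklable FMU object never has to cross a process boundary.
    chdir(fmu_path + str(id))
    model = load_fmu('f_' + str(id) + '.fmu', fmu_path + str(id))
    model.initialize(start_time, final_time)
    opts = model.simulate_options()
    opts['initialize'] = False  # the model is already initialized above
    res = model.simulate(start_time, final_time, options=opts)
    return {id: res['time'][-1]}  # return plain numbers, not the model

if __name__ == '__main__':
    with Pool(processes=cpus) as pool:
        out = pool.map(init_and_simulate, id_list)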
I tried pathos, as suggested in this answer, but ran into the same problem:
from os import chdir
from pyfmi import load_fmu
from pathos.multiprocessing import Pool

def ep_intialization(id):
    chdir('folder' + str(id))
    model = load_fmu('BouncingBall.fmu')
    model.initialize(0, 10)
    return {id: model}

id_list = [1, 2]
cpus = 2
data = (id for id in id_list)
pool = Pool(cpus)
out = pool.map(ep_intialization, data)
This gives:
MaybeEncodingError: Error sending result: '[{1: <pyfmi.fmi.FMUModelME2 object at 0x564e0c529290>}]'. Reason: 'TypeError('self._context,self._fmu,self.callBackFunctions,self.callbacks cannot be converted to a Python object for pickling',)'
Here is another idea:
I suppose the instantiation is slow because EnergyPlus links plenty of libraries into the FMU. If the components you are modelling all have the same interface (input, output, parameters), you can probably use a single FMU with an additional parameter that switches between the models.
This would be much more efficient: You would only have to instantiate a single FMU and could call it in parallel with different parameters and inputs.
Example:
I have never worked with EnergyPlus, but maybe the following example will illustrate the approach:
You have three variants of a building and you are merely interested in the total heat flux over the entire surface area of the buildings as a function of "weather" (whatever that means; maybe a lot of variables).
Put all three buildings into a single EnergyPlus model and build an if or case clause around them (pseudo code):
if (id_building == 1) {
    [model building one]
} elseif (id_building == 2) {
    [model building two]
}
[...]
Define the "weather" or whatever you need as an input variable of the FMU, and define id_building as a parameter as well. Define the overall heat flux as the output variable.
This would allow you to choose the building before starting the simulation.
The two requirements are:
The EnergyPlus syntax allows if or case structures.
All your models work with the same interface (in our example, weather as the input and a flux as the output variable).
There is a dirty workaround for the second requirement: just define all the variables all your models need and only use what you need in the respective if block.
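With pyfmi, the switch could then be driven per run roughly like this (a sketch; 'buildings.fmu' and the id_building parameter are only the hypothetical names from the example above):

from pyfmi import load_fmu

model = load_fmu('buildings.fmu')  # a single FMU containing all variants
results = {}
for building_id in (1, 2, 3):
    model.set('id_building', building_id)  # choose the variant
    results[building_id] = model.simulate(start_time, final_time)
    model.reset()  # clear state before the next run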

how can I get ALL records from route53?

I'm referring to the code snippet here, which seemed to work for someone, but it isn't clear to me: https://github.com/aws/aws-sdk-ruby/issues/620
I'm trying to get all of them (I have about ~7000 records) via resource record sets, but I can't seem to get the pagination to work with list_resource_record_sets. Here's what I have:
route53 = Aws::Route53::Client.new
response = route53.list_resource_record_sets({
start_record_name: fqdn(name),
start_record_type: type,
max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
})
response.last_page?
response = response.next_page until response.last_page?
I verified I'm hooked into the right region, and I can see the record I'm trying to get (so I can delete it later) in the AWS console, but I can't seem to get it through the API.
Any ideas on what I'm doing wrong? Or is there an easier way, perhaps another method in the API I'm not finding, to get just the record I need given the hosted_zone_id, type and name?
The issue you linked is for the Ruby AWS SDK v2, but the latest is v3. It also looks like things may have changed around a bit since 2014, as I'm not seeing the #next_page or #last_page? methods in the v2 API or the v3 API.
Consider using the #next_record_name and #next_record_type from the response when #is_truncated is true. That's more consistent with how other paginations work in the Ruby AWS SDK, such as with DynamoDB scans for example.
Something like the following should work (though I don't have an AWS account with records to test it out):
route53 = Aws::Route53::Client.new
hosted_zone = ? # Required field according to the API docs
next_name = fqdn(name)
next_type = type
loop do
  response = route53.list_resource_record_sets(
    hosted_zone_id: hosted_zone,
    start_record_name: next_name,
    start_record_type: next_type,
    max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
  )

  records = response.resource_record_sets

  # Break here if you find the record you want
  # Also break if we've run out of pages
  break unless response.is_truncated

  next_name = response.next_record_name
  next_type = response.next_record_type
end

rspec testing that api call increments a counter

I have a test that works right now, but it's ugly and I can't help thinking there is a better way to do this. Basically I pick a record from the database and then make an API call which should affect that record. However, the only way to make the test pass is to pull the record from the database a second time.
it "counts how many times a client has pulled its config" do
client = Endpoint.last
config_count = client.config_count
post '/api/config', node_key: client.node_key
same_client = Endpoint.find_by node_key: client.node_key
# expect(client.config_count).to eq(config_count + 1)
expect(same_client.config_count).to eq(config_count + 1)
end
The commented out line does not work. This fix is so ugly that it makes me think I'm doing it wrong. I also tried this:
expect {post '/api/config', node_key: client.node_key}.to change {client.config_count}.by(1)
So what is the proper way to test this?
There are probably several ways to solve it. I tend to call .reload on my object if I want updated values for it and don't care what exactly is happening inside the object itself.
it "counts how many times a client has pulled its config" do
client = Endpoint.last
config_count = client.config_count
post '/api/config', node_key: client.node_key
client.reload
expect(client.config_count).to eq(config_count + 1)
end

Get Garbage Collector metrics using WMI

I need to collect metrics about the percent time in the garbage collector on Windows servers, using WMI classes. I'm using the class Win32_PerfRawData_NETFramework_NETCLRMemory.
Is this correct?
Then I take two samples of that class and make the following calculation:
# PSEUDO CODE
PercentTime in GC =
(
(sample2->'PercentTimeinGC' - sample1->'PercentTimeinGC') /
(sample2->'TimeStamp_Sys100NS' - sample1->'TimeStamp_Sys100NS')
)
This calculation is definitely wrong; what is the right way to do it?
Thanks in advance.
gulden
After some digging in the unknown world of Windows, I've found the solution:
I've started with this link that explains the calculation methods for each kind of metric:
http://msdn.microsoft.com/en-us/library/ms974615.aspx
However, we need to know the countertype, in this case the countertype for "PercentTimeinGC". To find that, I need to run the wbemtest.exe program:
http://technet.microsoft.com/en-us/library/cc180684.aspx
Connect to "root\CIMV2"
Open Class... "Win32_PerfRawData_NETFramework_NETCLRMemory"
Select the property "PercentTimeinGC"
Click in the button "Show MOF"
Find the line:
"[DisplayName("% Time in GC"): ToInstance, countertype(537003008): ToInstance, perfindex(2606): ToInstance, helpindex(2607): ToInstance, defaultscale(0): ToInstance, perfdetail(100): ToInstance] uint32 PercentTimeinGC;"
Now that we know the countertype (537003008), we need to map it to a human-readable form. This link will help:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa389383(v=vs.85).aspx
The mapping for countertype 537003008 is PERF_RAW_FRACTION.
We go back to the first link and find the calculation method for PERF_RAW_FRACTION, which is:
(100 * CounterValue) / BaseValue
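In other words, PERF_RAW_FRACTION is a point-in-time ratio of two raw counters, not a delta between two timestamped samples. For example, if PercentTimeinGC (the CounterValue) reads 350 and its base counter reads 1000, the result is 100 * 350 / 1000 = 35% time in GC.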
I love Windows.
gulden
