Factory Girl: Use same object several times - ruby

I'm building a factory that will save some information as JSON for later analysis, in an app that cuts boards.
These boards will have plating around them, so the JSON will contain the ids for each one of their borders.
For example, we can have Platings 1 and 2.
And I need to generate boards that may look like this:
width: 500
height: 500
platings: {"up": 1, left: 2, right: 1}
So a single factory may need to reference the same plating id more than once, but I haven't found a way to do that, since every time I create a plating I get a new ID.
This is an example of what I have tried to do
Factory :medium_board do
  width 500
  height 500
  platings {{
    "up": create(:plating).id, # id: 1, OK
    "left": create(:another_plating).uuid, # id: 2, OK
    "right": create(:plating).uuid # id: 3, NOT OK, should have been 1.
  }}
end
Is there something fundamental that I am skipping?
What is the best way to do this?
Best

I would use an after hook here. Something like this:
factory :medium_board do
  after :build do |board|
    plating1 = create(:plating)
    plating2 = create(:another_plating)
    board.platings = {
      up: plating1.id,
      left: plating2.id,
      right: plating1.id,
    }
  end
end
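With that factory, building a board yields a hash that reuses the first plating's id. A quick sketch of the result (the ids are illustrative; the exact values depend on your database):

board = FactoryGirl.build(:medium_board)
board.platings
# => { up: 1, left: 2, right: 1 }  # :up and :right point at the same plating record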


cURL or wget for measuring page load time?

What's the best/most accurate way to measure a page's load time, including the time to load all of the page's resources? (Basically I'm trying to get the load time a real end user would see.)
Is it better to use Wget or cURL for this type of task? (The operating system in use will be Windows due to other dependencies)
You can download all the resources requested by a page with wget, using the -p option:
wget -p https://www.example.com/
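To turn that into a rough load-time number you can simply time the wget run, e.g. with PowerShell's Measure-Command on Windows (this measures wget's download time, not browser rendering; wget.exe is assumed to be GNU wget on your PATH):
Measure-Command { wget.exe -p -q --delete-after https://www.example.com/ }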
curl doesn't parse HTML, so it can't be used for this. It will just fetch the initial page's HTML; if that HTML references images, CSS or JS files, curl won't know about them and won't download any of them.
wget won't be a very accurate measure of the page's load time if the page requests resources through JavaScript, because wget doesn't parse or execute JavaScript. A more accurate way to get user perceived load times is to open the page in Chrome and then look at how long it took. You can see an accurate breakdown by looking at the Network tab in the Dev Tools. If you're trying to automate it, you can use Chrome Headless through Puppeteer, something like this:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  // Get the first tab
  const page = (await browser.pages())[0];
  await page.goto('https://example.com/');
  // page.evaluate returns a promise, so it needs to be awaited
  const loadTime = await page.evaluate(() => window.performance.timing.loadEventEnd - window.performance.timing.navigationStart);
  console.log(loadTime);
  await browser.close();
})();
This on its own will also not be accurate due to caching of DNS results, the TLS handshake or resources by the browser.
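If you want each run to start from a cold browser cache, Puppeteer can at least disable caching before navigation (DNS and TLS setup are still outside its control); a small addition to the script above:
// add before page.goto(...)
await page.setCacheEnabled(false);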
UPDATE
If you only get the page load time, you cannot accomplish what you want to do. At a minimum, you need "Time to First Byte" to separate server delays from front-end performance issues.
WebPageTest has a LOT of data, but what I use is minimal. In the example below, the JSON file has 162,260 lines of JSON (5.5 MB); I use 125 lines (4 KB) of it.
To show you how easy it is, I wrote this PHP app in a couple of hours.
I entered a URL at https://www.webpagetest.org
I copied the "Download JSON" link
Pasted it into my PHP code
The code:
Retrieved the JSON
Got a substring of the JSON I needed.
Then formatted the data
And made a table
I just used a few parameters that would be needed to assess performance.
I included the Number of DOM elements and JS & CSS Blocking because those are very common issues with many pages, especially WordPress pages.
I find the layout shift to be important. Layout shift is when, after the first paint, JS makes a change that forces the browser to render again.
<?php
header("Content-Type: text/html; UTF-8");
$data = file_get_contents('https://www.webpagetest.org/jsonResult.php?test=<result id>&pretty=1');
$start = strpos($data,'chromeUserTiming.CumulativeLayoutShift": ') + 41;
$LayoutShift = substr($data,$start,6);
$start = strpos($data,'"firstView": {') + 13;
$end = strpos($data,'}',$start) + 1;
$json = substr($data,$start,$end - $start);
unset($data);
$json = json_decode($json,1);
$TTFB = number_format($json['TTFB'] / 1000,3);
$loadTime = number_format($json['loadTime'] / 1000,3);
$fullyLoaded = number_format($json['fullyLoaded'] / 1000,3);
$loadEventStart = number_format($json['loadEventStart'] / 1000,3);
$firstPaint = number_format($json['firstPaint'] / 1000,3);
$firstContentfulPaint = number_format($json['firstContentfulPaint'] / 1000,3);
$renderBlockingCSS = number_format($json['renderBlockingCSS'] / 1000,3);
$renderBlockingJS = number_format($json['renderBlockingJS'] / 1000,3);
$TotalBlockingTime = number_format($json['TotalBlockingTime'] / 1000,3);
$FirstInteractive = number_format($json['FirstInteractive'] / 1000,3);
$requests = $json['requests'] ;
$domElements = $json['domElements'] ;
$domComplete = number_format($json['domComplete'] / 1000,3);
$LayoutShift = number_format($json['chromeUserTiming.CumulativeLayoutShift'], 4); // layout shift is a unitless score, not milliseconds
echo <<<EOT
<table>
<tr><td>First Byte</td><td>$TTFB</td></tr>
<tr><td>DOM Complete</td><td>$domComplete</td></tr>
<tr><td>Load Time</td><td>$loadTime</td></tr>
<tr><td>First Paint</td><td>$firstPaint </td></tr>
<tr><td>Fully Loaded</td><td>$fullyLoaded</td></tr>
<tr><td></td> <td></td> </tr>
<tr><td></td> <td></td> </tr>
<tr><td>Requests</td><td>$requests</td></tr>
<tr><td>DOM Elements</td><td>$domElements</td></tr>
<tr><td>CSS Render Blocking</td><td>$renderBlockingCSS</td></tr>
<tr><td>JS Render Blocking</td><td>$renderBlockingJS</td></tr>
<tr><td>Total Blocking Time</td><td>$TotalBlockingTime </td></tr>
<tr><td>Layout Shift</td><td>$LayoutShift </td></tr>
</table>
EOT;
?>
The results
These results are for a mobile page with lots of pictures.
For mobile I like to have as few requests as possible.
I embed the images in the page as base64.
This is the substring of the JSON I actually use:
{
"loadTime": 580.33333333333337,
"docTime": 580.33333333333337,
"fullyLoaded": 640.33333333333337,
"bytesOut": 3480,
"bytesOutDoc": 3114.6666666666665,
"bytesIn": 117909,
"bytesInDoc": 117862.66666666667,
"requests": 3,
"requestsFull": 3,
"requestsDoc": 2.6666666666666665,
"responses_200": 3,
"responses_404": 0,
"responses_other": 0,
"result": 0,
"testStartOffset": 0,
"cached": 0,
"optimization_checked": 1,
"loadEventStart": 575.33333333333337,
"loadEventEnd": 575.66666666666663,
"domContentLoadedEventStart": 510.33333333333331,
"domContentLoadedEventEnd": 510.33333333333331,
"connections": 1,
"final_base_page_request": 0,
"firstPaint": 304.63333333283333,
"firstContentfulPaint": 309.66666666666669,
"firstImagePaint": 309.66666666666669,
"firstMeaningfulPaint": 309.66666666666669,
"domInteractive": 510.33333333333331,
"renderBlockingCSS": 0,
"renderBlockingJS": 0,
"TTFB": 264.66666666666669,
"score_cache": 0,
"score_cdn": 0,
"score_gzip": 100,
"score_cookies": -1,
"score_keep-alive": 100,
"score_minify": -1,
"score_combine": -1,
"score_compress": 100,
"score_etags": -1,
"score_progressive_jpeg": -1,
"gzip_total": 118336,
"gzip_savings": 0,
"minify_total": -1,
"minify_savings": -1,
"image_total": 1667,
"image_savings": 0,
"cpu.PrePaint": 1,
"cpu.Paint": 2.6666666666666665,
"cpu.FireAnimationFrame": 0,
"cpu.FunctionCall": 5.333333333333333,
"cpu.EventDispatch": 0.66666666666666663,
"cpu.CommitLoad": 0,
"cpu.EvaluateScript": 1.3333333333333333,
"cpu.v8.compile": 0.33333333333333331,
"cpu.ParseHTML": 7,
"cpu.ResourceFetcher::requestResource": 8.3333333333333339,
"cpu.UpdateLayoutTree": 9.6666666666666661,
"cpu.Layout": 17.333333333333332,
"cpu.largestContentfulPaint::Candidate": 0,
"cpu.HitTest": 0,
"cpu.MarkDOMContent": 0,
"cpu.MarkLoad": 0,
"cpu.Idle": 586.33333333333337,
"start_epoch": 1666370740.1789024,
"date": 1666370741.9859715,
"fullyLoadedCPUms": 670,
"fullyLoadedCPUpct": 9.1036962354000011,
"domElements": 75,
"domComplete": 575.33333333333337,
"PerformancePaintTiming.first-paint": 304.63333333283333,
"PerformancePaintTiming.first-contentful-paint": 304.63333333283333,
"test_run_time_ms": 5673.333333333333,
"Colordepth": 24,
"generated-content-percent": -0.029999999999999999,
"generated-content-size": -0.040000000000000001,
"lastVisualChange": 400,
"render": 300,
"visualComplete85": 300,
"visualComplete90": 300,
"visualComplete95": 300,
"visualComplete99": 400,
"visualComplete": 400,
"SpeedIndex": 303,
"chromeUserTiming.navigationStart": 5.333333333333333,
"chromeUserTiming.fetchStart": 8.3333333333333339,
"chromeUserTiming.unloadEventStart": 275.66666666666669,
"chromeUserTiming.unloadEventEnd": 275.66666666666669,
"chromeUserTiming.commitNavigationEnd": 276,
"chromeUserTiming.domLoading": 276.33333333333331,
"chromeUserTiming.firstPaint": 309,
"chromeUserTiming.firstContentfulPaint": 309,
"chromeUserTiming.firstImagePaint": 309.33333333333331,
"chromeUserTiming.firstMeaningfulPaintCandidate": 309.33333333333331,
"chromeUserTiming.firstMeaningfulPaint": 309.33333333333331,
"chromeUserTiming.responseEnd": 510.33333333333331,
"chromeUserTiming.domInteractive": 515.66666666666663,
"chromeUserTiming.domContentLoadedEventStart": 515.66666666666663,
"chromeUserTiming.domContentLoadedEventEnd": 515.66666666666663,
"chromeUserTiming.domComplete": 580,
"chromeUserTiming.loadEventStart": 580.33333333333337,
"chromeUserTiming.loadEventEnd": 580.66666666666663,
"chromeUserTiming.LargestTextPaint": 309.66666666666669,
"chromeUserTiming.LargestImagePaint": 309.66666666666669,
"chromeUserTiming.LargestContentfulPaint": 309.66666666666669,
"chromeUserTiming.TotalLayoutShift": 0.0011286408333333333,
"chromeUserTiming.CumulativeLayoutShift": 0.0011286408333333333,
"TTIMeasurementEnd": 3657.3333333333335,
"LastInteractive": 300,
"run": 2,
"step": 1,
"effectiveBps": 313879.33333333331,
"domTime": 0,
"aft": 0,
"titleTime": 293.33333333333331,
"domLoading": 0,
"server_rtt": 0,
"maxFID": 0,
"TotalBlockingTime": 0,
"effectiveBpsDoc": 373405.33333333331,
"chromeUserTiming.LayoutShift": 109,
"avgRun": 3
}
I also use GT Metrix
And PageSpeed Insights
And W3C Validators, both HTML and CSS
End of Update
You can use curl to get the response time for the HTML. It will not give you much more.
curl (via PHP's curl_getinfo, for example) will give you:
CURLINFO_SIZE_UPLOAD - Total number of bytes uploaded
CURLINFO_SIZE_DOWNLOAD - Total number of bytes downloaded
CURLINFO_SPEED_DOWNLOAD - Average download speed
CURLINFO_SPEED_UPLOAD - Average upload speed
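From the command line, curl's --write-out option can also print per-phase timings for that single request (initial HTML only, no subresources):
curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n' https://www.example.com/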
The best way I know of to get EVERY detail of an HTTP request is to use https://www.webpagetest.org
You can use their API to get 300 performance test results per month for Free.
LINK to webpagetest.org API
Example
You curl the API and it returns the links to the details. Within those links there is a plethora of information and images.
Then curl the links you want.
LINK to HAR details of loading this page. 110,000 lines of info in 6,795,929 bytes
curl "https://www.webpagetest.org/runtest.php?url=https://www.webpagetest.org&k={YOUR_API_KEY}&f=json"
{
  "statusCode": 200,
  "statusText": "Ok",
  "data": {
    "testId": "210328_XiVQ_b694021b2a24ca1912dae50fb58b5861",
    "jsonUrl": "https://www.webpagetest.org/jsonResult.php?test=210328_XiVQ_b694021b2a24ca1912dae50fb58b5861",
    "xmlUrl": "https://www.webpagetest.org/xmlResult/210328_XiVQ_b694021b2a24ca1912dae50fb58b5861/",
    "userUrl": "https://www.webpagetest.org/result/210328_XiVQ_b694021b2a24ca1912dae50fb58b5861/",
    "summaryCSV": "https://www.webpagetest.org/result/210328_XiVQ_b694021b2a24ca1912dae50fb58b5861/page_data.csv",
    "detailCSV": "https://www.webpagetest.org/result/210328_XiVQ_b694021b2a24ca1912dae50fb58b5861/requests.csv"
  }
}
The waterfall view on the WebPageTest result page shows some of the details you can get.
You can get the nitty gritty details in various formats:
JSON
CSV
XML
HAR
Sample of CSV info
"type","id","request_id","ip_addr","full_url","is_secure","method","host","url","raw_id","frame_id","documentURL","responseCode","request_type","load_ms","ttfb_ms","load_start","load_start_float","bytesIn","objectSize","objectSizeUncompressed","expires","cacheControl","contentType","contentEncoding","socket","protocol","dns_start","dns_end","connect_start","connect_end","ssl_start","ssl_end","initiator","initiator_line","initiator_column","initiator_type","priority","initial_priority","server_rtt","bytesOut","score_cache","score_cdn","score_gzip","score_cookies","score_keep-alive","score_minify","score_combine","score_compress","score_etags","dns_ms","connect_ms","ssl_ms","gzip_total","gzip_save","minify_total","minify_save","image_total","image_save","cache_time","cdn_provider","server_count","created","http2_stream_id","http2_stream_dependency","http2_stream_weight","http2_stream_exclusive","tls_version","tls_resumed","tls_next_proto","tls_cipher_suite","netlog_id","server_port","final_base_page","is_base_page","load_end","ttfb_start","ttfb_end","download_start","download_end","download_ms","all_start","all_end","all_ms","index","number","cpu.EvaluateScript","cpu.v8.compile","cpu.FunctionCall","cpuTime","run","cached","renderBlocking","initiator_function",
"3","221020_AiDc05_4BQ","BE8FE33DABBBA0CF2B3AC5D78083C66E","151.101.193.69","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","1","GET","stackoverflow.com","/questions/74122109/curl-or-wget-for-measuring-page-load-time","BE8FE33DABBBA0CF2B3AC5D78083C66E","4476133D5C5FF9920B0E2BB49084B133","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","200","Document","346","198","533","533.000041","40562","40562","145896","","private","text/html","gzip","50","HTTP/2","0","172","173","344","344","533","","","","script","Highest","Highest","","2317","-1","100","100","-1","100","-1","-1","-1","-1","-1","171","189","41099","0","","","","","","Fastly","","4","1","0","256","1","TLS 1.2","False","h2","49199","41","443","1","1","879","533","731","731","879","148","173","879","706","0","1","135","26","2","164","1","0","","",
"3","221020_AiDc05_4BQ","6487.6","151.101.193.69","https://cdn.sstatic.net/Js/stub.en.js?v=0e3ada576039","1","GET","cdn.sstatic.net","/Js/stub.en.js?v=0e3ada576039","6487.6","4476133D5C5FF9920B0E2BB49084B133","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","200","Script","585","568","967","967.000067","18152","18152","53336","","max-age=604800","application/javascript","gzip","50","HTTP/2","787","967","-1","-1","-1","-1","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","25","","parser","High","High","","1831","50","100","100","-1","100","-1","-1","-1","-1","180","-1","-1","18152","0","","","","","34541","Fastly","","788","9","","","","","","","","67","","","","1552","967","1535","1535","1552","17","787","1552","765","1","2","34","8","443","485","1","0","blocking","",
"3","221020_AiDc05_4BQ","6487.5","151.101.193.69","https://cdn.sstatic.net/Js/third-party/npm/#stackoverflow/stacks/dist/js/stacks.min.js?v=facbc6b2f3b6","1","GET","cdn.sstatic.net","/Js/third-party/npm/#stackoverflow/stacks/dist/js/stacks.min.js?v=facbc6b2f3b6","6487.5","4476133D5C5FF9920B0E2BB49084B133","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","200","Script","599","568","968","968.000061","23746","23746","100193","","max-age=604800","application/javascript","gzip","50","HTTP/2","-1","-1","-1","-1","-1","-1","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","24","","parser","Low","Low","","1978","50","100","100","-1","100","-1","-1","-1","-1","-1","-1","-1","23746","0","","","","","567498","Fastly","","786","11","","","","","","","","61","","","","1567","968","1536","1536","1567","31","968","1567","599","2","3","81","2","38","121","1","0","potentially_blocking","",
"3","221020_AiDc05_4BQ","6487.7","151.101.193.69","https://cdn.sstatic.net/Shared/stacks.css?v=5ad0f45f4799","1","GET","cdn.sstatic.net","/Shared/stacks.css?v=5ad0f45f4799","6487.7","4476133D5C5FF9920B0E2BB49084B133","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","200","Stylesheet","363","188","968","968.000072","68692","68692","654577","","max-age=604800","text/css","gzip","50","HTTP/2","-1","-1","-1","-1","-1","-1","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","28","","parser","Highest","Highest","","1885","50","100","100","-1","100","-1","-1","-1","-1","-1","-1","-1","68692","0","","","","","542107","Fastly","","790","3","","","","","","","","72","","","","1331","968","1156","1156","1331","175","968","1331","363","3","4","","","","","1","0","blocking","",
"3","221020_AiDc05_4BQ","6487.8","151.101.193.69","https://cdn.sstatic.net/Sites/stackoverflow/primary.css?v=0cc30ee01b86","1","GET","cdn.sstatic.net","/Sites/stackoverflow/primary.css?v=0cc30ee01b86","6487.8","4476133D5C5FF9920B0E2BB49084B133","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","200","Stylesheet","550","374","968","968.000077","60467","60467","339815","","max-age=604800","text/css","gzip","50","HTTP/2","-1","-1","-1","-1","-1","-1","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","29","","parser","Highest","Highest","","1927","50","100","100","-1","100","-1","-1","-1","-1","-1","-1","-1","60467","0","","","","","134559","Fastly","","793","5","","","","","","","","77","","","","1518","968","1342","1342","1518","176","968","1518","550","4","5","","","","","1","0","blocking","",
"3","221020_AiDc05_4BQ","6487.9","151.101.193.69","https://cdn.sstatic.net/Shared/Channels/channels.css?v=d098999fc478","1","GET","cdn.sstatic.net","/Shared/Channels/channels.css?v=d098999fc478","6487.9","4476133D5C5FF9920B0E2BB49084B133","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","200","Stylesheet","551","393","968","968.000082","4168","4168","18913","","max-age=604800","text/css","gzip","50","HTTP/2","-1","-1","-1","-1","-1","-1","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","70","","parser","Highest","Highest","","1918","50","100","100","-1","100","-1","-1","-1","-1","-1","-1","-1","4168","0","","","","","477490","Fastly","","797","7","","","","","","","","82","","","","1519","968","1361","1361","1519","158","968","1519","551","5","6","","","","","1","0","blocking","",
"3","221020_AiDc05_4BQ","6487.4","142.251.163.95","https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js","1","GET","ajax.googleapis.com","/ajax/libs/jquery/1.12.4/jquery.min.js","6487.4","4476133D5C5FF9920B0E2BB49084B133","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","200","Script","370","175","1309","1309.000055","33951","33951","97163","Thu, 19 Oct 2023 17:18:45 GMT","public, max-age=31536000, stale-while-revalidate=2592000","text/javascript","gzip","87","HTTP/2","784","955","955","1128","1128","1309","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","23","","parser","High","High","","1870","100","100","100","-1","100","-1","-1","-1","-1","171","173","181","33951","0","","","","","31493759","Google","","783","1","0","220","1","TLS 1.3","False","h2","4865","55","443","","","1679","1309","1484","1484","1679","195","784","1679","895","6","7","64","12","31","107","1","0","blocking","",
"3","221020_AiDc05_4BQ","6487.10","151.101.193.69","https://cdn.sstatic.net/Img/teams/teams-illo-free-sidebar-promo.svg?v=47faa659a05e","1","GET","cdn.sstatic.net","/Img/teams/teams-illo-free-sidebar-promo.svg?v=47faa659a05e","6487.10","4476133D5C5FF9920B0E2BB49084B133","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","200","Image","176","175","1580","1580.000099","2368","2368","5950","","max-age=604800","image/svg+xml","gzip","50","HTTP/2","-1","-1","-1","-1","-1","-1","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","485","","parser","Low","Low","","2101","50","100","100","-1","100","-1","-1","-1","-1","-1","-1","-1","2368","0","","","","","93767","Fastly","","1579","13","0","147","1","","","","","99","443","","","1756","1580","1755","1755","1756","1","1580","1756","176","7","8","","","","","1","0","","",
"3","221020_AiDc05_4BQ","6487.20","151.101.193.69","https://cdn.sstatic.net/Img/unified/sprites.svg?v=fcc0ea44ba27","1","GET","cdn.sstatic.net","/Img/unified/sprites.svg?v=fcc0ea44ba27","6487.20","4476133D5C5FF9920B0E2BB49084B133","https://stackoverflow.com/questions/74122109/curl-or-wget-for-measuring-page-load-time","200","Image","176","174","2163","2163.000126","2852","2852","7542","","max-age=604800","image/svg+xml","gzip","50","HTTP/2","-1","-1","-1","-1","-1","-1","https://cdn.sstatic.net/Sites/stackoverflow/primary.css?v=0cc30ee01b86","","","parser","High","Low","","2173","50","100","100","-1","100","-1","-1","-1","-1","-1","-1","-1","2852","0","","","","","383944","Fastly","","2162","15","0","147","1","","","","","126","443","","","2339","2163","2337","2337","2339","2","2163","2339","176","8","9","","","","","1","0","","",
The CSV contains the following details on every request made to render the page.
type
id
request_id
ip_addr
full_url
is_secure
method
host
url
raw_id
frame_id
documentURL
responseCode
request_type
load_ms
ttfb_ms
load_start
load_start_float
bytesIn
objectSize
objectSizeUncompressed
expires
cacheControl
contentType
contentEncoding
socket
protocol
dns_start
dns_end
connect_start
connect_end
ssl_start
ssl_end
initiator
initiator_line
initiator_column
initiator_type
priority
initial_priority
server_rtt
bytesOut
score_cache
score_cdn
score_gzip
score_cookies
score_keep-alive
score_minify
score_combine
score_compress
score_etags
dns_ms
connect_ms
ssl_ms
gzip_total
gzip_save
minify_total
minify_save
image_total
image_save
cache_time
cdn_provider
server_count
created
http2_stream_id
http2_stream_dependency
http2_stream_weight
http2_stream_exclusive
tls_version
tls_resumed
tls_next_proto
tls_cipher_suite
netlog_id
server_port
final_base_page
is_base_page
load_end
ttfb_start
ttfb_end
download_start
download_end
download_ms
all_start
all_end
all_ms
index
number
cpu.EvaluateScript
cpu.v8.compile
cpu.FunctionCall
cpuTime
run
cached
renderBlocking
initiator_function

How to "inspect to file" (or to string) in Elixir?

In Elixir, we can IO.inspect anyStructure to get anyStructure's internals printed to output. Is there a similar method to output it to a file (or, as a more flexible solution, to a string)?
I've looked through some articles on debugging and io but don't see a solution. I've also tried
{:ok, file} = File.open("test.log", [:append, {:delayed_write, 100, 20}])
structure = %{ a: 1, b: 2 }
IO.binwrite(file, structure)
File.close file
but that results in
no function clause matching in IO.binwrite/2 [...]
def binwrite(device, iodata) when is_list(iodata) or is_binary(iodata)
I’ve also googled some "elixir serialize" and "elixir object to string", but haven't found anything useful (like :erlang.term_to_binary which returns, well, binary). Is there a simple way to get the same result that IO.inspect prints, into a file or a string?
There is already an inspect/2 function (not the same as IO.inspect); just go with it:
#> inspect({1,2,3})
"{1, 2, 3}"
#> h inspect/2
def inspect(term, opts \\ [])
@spec inspect(
  Inspect.t(),
  keyword()
) :: String.t()
Inspects the given argument according to the Inspect protocol. The second
argument is a keyword list with options to control inspection.
You can do whatever you wish with the string afterwards.
You can give IO.inspect an additional param to tell it where to write to:
{:ok, pid} = StringIO.open("")
IO.inspect(pid, %{test: "data"}, label: "IO.inspect options work too \o/")
{:ok, {_in, out}} = StringIO.close(pid)
out # "IO.inspect options work too o/: %{test: \"data\"}\n"
It accepts a pid of a process to write to. StringIO provides such a process, returning you a string on close.
In Elixir, we can IO.inspect anyStructure to get anyStructure's internals printed to output.
This is not quite true; IO.inspect uses the Inspect protocol. What you see is not the internals of the struct, but whatever that struct's implementation of the Inspect protocol is written to produce. There are different options you can give to inspect, defined in Inspect.Opts, one of them is structs: false, which will print structs as maps.
For example, inspecting a range struct:
iex> inspect(1..10)
"1..10"
iex> inspect(1..10, structs: false)
"%{__struct__: Range, first: 1, last: 10, step: 1}"
To answer your question and to add to the other answers, here is a method that uses File.open!/3 to reuse an open file and log multiple inspect calls to the same file, then close the file:
File.open!("test.log", [:write], fn file ->
  IO.inspect(file, %{ a: 1, b: 2 }, [])
  IO.inspect(file, "logging a string", [])
  IO.inspect(file, DateTime.utc_now(), [])
  IO.inspect(file, DateTime.utc_now(), structs: false)
end)
This produces the following test.log file:
%{a: 1, b: 2}
"logging a string"
~U[2022-04-29 09:51:46.467338Z]
%{
  __struct__: DateTime,
  calendar: Calendar.ISO,
  day: 29,
  hour: 9,
  microsecond: {485474, 6},
  minute: 51,
  month: 4,
  second: 46,
  std_offset: 0,
  time_zone: "Etc/UTC",
  utc_offset: 0,
  year: 2022,
  zone_abbr: "UTC"
}
You simply need to combine inspect/2, which returns a binary, with File.write/3 or any other function that dumps to a file.
File.write("test.log", inspect(%{a: 1, b: 2}, limit: :infinity))
Note the limit: :infinity option; without it, long structures are truncated for readability, which is what you want when inspecting to stdout but not when dumping to a file.

lua - How to perform transitions in sequence

I'm trying to move an object along the points of a complex curved path with a constant velocity, using transitions.
I have two tables to keep the coordinates of the points, and another table with the respective time intervals for travelling each linear segment at the same speed (even though they have different lengths).
Assuming the first and last values of the "timeTable" are 0, I tried something similar to this:
local i = 1
local function Move()
    transition.to(player, {time=timeTable[i+1], x=TableX[i+1], y=TableY[i+1]})
    i = i + 1
end
timer.performWithDelay( timeTable[i], Move, 0 )
It doesn't work, although no error is given.
Thanks in advance for your help.
Maybe this would work:
local timeTable = {1, 3, 4, 1}
local TableX = {100, 400, 400, 500}
local TableY = {100, 100, 500, 500}

local i = 0

local function onCompleteMove()
    i = i + 1
    if timeTable[i] then
        transition.to(player, {
            time=timeTable[i],
            x=TableX[i],
            y=TableY[i],
            onComplete=onCompleteMove
        })
    end
end
onCompleteMove() -- start moving to first point
Try
Tutorial: Moving objects along a path
Tutorial: Working with curved paths
A method for chaining transitions for the same object:
local function chainOfTransitions(object, params, ...)
    if params then
        local remaining = { ... }  -- capture the remaining params tables (the implicit 'arg' table is not reliable in Lua 5.1)
        function params.onComplete()
            chainOfTransitions(object, unpack(remaining))
        end
        transition.to(object, params)
    end
end
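Called with the question's tables, for example, the segments then run one after another (the times and coordinates are whatever you already have):
chainOfTransitions(player,
    { time = timeTable[1], x = TableX[1], y = TableY[1] },
    { time = timeTable[2], x = TableX[2], y = TableY[2] },
    { time = timeTable[3], x = TableX[3], y = TableY[3] }
)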
Thanks to all of you!
I accomplished the goal by doing so:
local segmentTransition
local delta = 1
local i = 0  -- starting index

local function onCompleteMove()
    i = i + delta
    if timeTable[i] then
        segmentTransition = transition.to(player2, {
            time=timeTable[i],
            x=tableX[i+delta],
            y=tableY[i+delta],
            onComplete=onCompleteMove
        })
    end
end
onCompleteMove() -- start moving

use for loop to call multiple functions in lua

I want to call multiple methods in Lua that are very similar, except their parameters change by one character. The way I'm doing it now works but is extremely inefficient.
function scene:createScene(event)
    screenGroup = self.view

    level1 = display.newRoundedRect( 50, 110, 50, 50, 5 )
    level1:setFillColor( 100,0,200 )

    level2 = display.newRoundedRect( 105, 110, 50, 50, 5 )
    level2:setFillColor (100,200,0)

    --and so on so forth

    screenGroup:insert (level1)
    screenGroup:insert (level2)
    screenGroup:insert (level3)
    screenGroup:insert (level4)
end
I plan on extending the screenGroup:insert calls to hundreds of levels, maybe up to (level300). As you can see, the way I'm doing it now is inefficient. I tried doing
for i=1, 4, 1 do
    screenGroup:insert(level..i)
end
but I get the error "table expected."
The best way in this case is to probably use a table:
local levels = {}
levels[1] = display.newRoundedRect( 50, 110, 50, 50, 5 )
levels[1]:setFillColor( 100,0,200 )
levels[2] = display.newRoundedRect( 105, 110, 50, 50, 5 )
levels[2]:setFillColor (100,200,0)
--and so on so forth
for _, level in ipairs(levels) do
    screenGroup:insert(level)
end
For other alternatives check the SO answer from @EtanReisner's comment.
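If the rectangles really only differ by position and fill color, you can also build them in the same loop (coordinates and colors here are illustrative):
local levels = {}
local fills = { {100, 0, 200}, {100, 200, 0} }  -- one fill color per level
for i, fill in ipairs(fills) do
    levels[i] = display.newRoundedRect( 50 + (i - 1) * 55, 110, 50, 50, 5 )
    levels[i]:setFillColor( unpack(fill) )
    screenGroup:insert(levels[i])
end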
If your 'level' tables are global, which it appears they are, you can use getfenv to index them.
for i = 1, number_of_levels do
    screenGroup:insert(getfenv()["level" .. i])
end
getfenv returns the environment, with all global variables, in the form of a dictionary. Therefore, you can index it like a normal table: getfenv()["key"].
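In stock Lua 5.1 (which Corona uses) the same lookup can also be written against the global table _G, which is equivalent for plain globals:
for i = 1, number_of_levels do
    screenGroup:insert(_G["level" .. i])
end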

Multiple events matching algorithm

I have a task to match multiple events (facts) with each other by some of their properties.
As a result of matching events, an action should be generated. An action can be generated once events of all existing types have been matched.
Is there any algorithm that could be used for such a task? Or any direction?
Thanks
Example:
We have several events with different types and properties.
Type SEEN is a cumulative event type (several events can be merged for matching); type FOUND is not.
Event 1 (SEEN):
DATE="2009-09-30"
EYES_COLOR="BLUE"
LEFT_SOCK_COLOR="RED"
Event 2 (SEEN):
DATE="2009-09-30"
EYES_COLOR="BLUE"
RIGHT_SOCK_COLOR="GREEN"
Event 3 (FOUND):
DATE="2009-09-30"
EYES_COLOR="BLUE"
LEFT_SOCK_COLOR="BLUE"
RIGHT_SOCK_COLOR="GREEN"
PLACE="MARKET"
Event 4 (FOUND):
DATE="2009-09-30"
EYES_COLOR="BLUE"
LEFT_SOCK_COLOR="GREEN"
PLACE="SHOP"
Event 5 (FOUND):
DATE="2009-09-30"
EYES_COLOR="BLUE"
PLACE="AIRPORT"
For above events such actions should be generated (by composing matched events):
Action 1_2_3:
DATE="2009-09-30"
EYES_COLOR="BLUE"
LEFT_SOCK_COLOR="RED"
RIGHT_SOCK_COLOR="GREEN"
PLACE="MARKET"
Action 2_4:
DATE="2009-09-30"
EYES_COLOR="BLUE"
LEFT_SOCK_COLOR="GREEN"
PLACE="SHOP"
Means:
Event 1 + Event 2 + Event 3 => Action 1_2_3
Event 2 + Event 4 => Action 2_4
Event 5 does not match with anything.
In your case, any two events are either compatible or not; we can denote this by C(e,e'), meaning that event e is compatible with event e'. You can of course build a maximal set of compatible events iteratively: when you have a set {e1,e2,...,en} of compatible events, you can add e' to the set if and only if e' is compatible with every one of e1,...,en, i.e. C(ei,e') is true for all 1<=i<=n.
Unfortunately, in your case the number of maximal sets of compatible events can be exponential in the number of events, because you can have, e.g., events e1, e2, e3 and e4 such that they are all pairwise compatible but none of them is compatible with two others at once; for this set you already get 6 different "actions" (one per pair), and they overlap each other.
A simple algorithm is a recursive search where you add events one by one to the prospective "action", and when you can't add any more events you record the action; then you backtrack. This is called "backtracking search". You can then improve its running time with proper data structures for quickly looking up matching events.
As in the comment, the question about SEEN/FOUND is open; I'm assuming here that the fields are merged "as is".
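A rough Python sketch of that backtracking search, assuming events are plain dicts and that two events are compatible when they don't disagree on any shared key (illustrative only, not optimized):

def compatible(e1, e2):
    # events conflict if they share a key with different values
    return all(e1[k] == e2[k] for k in e1 if k in e2)

def maximal_compatible_sets(events):
    results = []

    def can_extend(group, e):
        return e not in group and all(compatible(e, g) for g in group)

    def backtrack(group, start):
        extended = False
        for i in range(start, len(events)):
            if can_extend(group, events[i]):
                extended = True
                backtrack(group + [events[i]], i + 1)
        # record only groups that no event at all can still extend (maximal sets)
        if not extended and group and not any(can_extend(group, e) for e in events):
            results.append(group)

    backtrack([], 0)
    return results

Each resulting group corresponds to one candidate "action"; the SEEN/FOUND distinction is ignored here, per the assumption above.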
This pseudo-code may help: (C# syntax)
foreach (var found in events.Where(x => x.EventType == "Found"))
{
    var matches = events.Where(x => x.EventType == "Seen"
                                 && x.Whatever == found.Whatever);
    if (matches.Count() > 0)
    {
        // Create an action based on the single "Found" event
        // and the multiple matching "Seen" events.
    }
}
I'm not sure I understand the question correctly. It seems that for every FOUND event, you want to identify all matching SEEN events and merge them? Python code:
# assume events are dictionaries, and you have 2 lists of them by type:
# (omitting DATE because it's always "2009-09-30" in your example)
seen_events = [
    {
        "EYES_COLOR": "BLUE",
        "LEFT_SOCK_COLOR": "RED",
    },
    {
        "EYES_COLOR": "BLUE",
        "RIGHT_SOCK_COLOR": "GREEN",
    },
]

found_events = [
    {
        "EYES_COLOR": "BLUE",
        "LEFT_SOCK_COLOR": "BLUE",
        "RIGHT_SOCK_COLOR": "GREEN",
        "PLACE": "MARKET",
    },
    {
        "EYES_COLOR": "BLUE",
        "LEFT_SOCK_COLOR": "GREEN",
        "PLACE": "SHOP",
    },
    {
        "EYES_COLOR": "BLUE",
        "PLACE": "AIRPORT",
    },
]

def do_action(seen_events, found):
    """DUMMY"""
    for seen in seen_events:
        print seen
    print found
    print

# brute force
for found in found_events:
    matching = []
    for seen in seen_events:
        for k in found:
            if k in seen and seen[k] != found[k]:
                break
        else:  # for ended without break (Python syntax)
            matching.append(seen)
    if matching:
        do_action(matching, found)
which prints:
{'EYES_COLOR': 'BLUE', 'RIGHT_SOCK_COLOR': 'GREEN'}
{'EYES_COLOR': 'BLUE', 'PLACE': 'MARKET', 'LEFT_SOCK_COLOR': 'BLUE', 'RIGHT_SOCK_COLOR': 'GREEN'}
{'EYES_COLOR': 'BLUE', 'RIGHT_SOCK_COLOR': 'GREEN'}
{'EYES_COLOR': 'BLUE', 'PLACE': 'SHOP', 'LEFT_SOCK_COLOR': 'GREEN'}
{'EYES_COLOR': 'BLUE', 'LEFT_SOCK_COLOR': 'RED'}
{'EYES_COLOR': 'BLUE', 'RIGHT_SOCK_COLOR': 'GREEN'}
{'EYES_COLOR': 'BLUE', 'PLACE': 'AIRPORT'}
Right, this is not efficient - O(n*m) - but does this even describe the problem correctly?
