I am creating a query application that lets users filter several tables down and then download the results as CSV files. With small result sets this has proven very easy, but several of the result sets will be 300k+ rows of data.
Timeout errors were being thrown, so I need a new approach. The application is written in Laravel.
I was able to run a raw query and create a CSV with 380k rows of data, but the --secure-file-priv setting forced me to put the file in a specific place. I need the download file to be accessible to the user who filtered the data down.
My current three approaches are as follows.
Approach #1:
// $performance = DB::select("SELECT * from performance_datas INTO OUTFILE '/var/lib/mysql-files/performance_data2.csv' FIELDS ENCLOSED BY '\"' TERMINATED BY ';' ESCAPED BY '\"' LINES TERMINATED BY '\r\n' ;");
This raw query created the intended file, but I don't know how to make this accessible to the user to download.
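For completeness, serving that file back would presumably look something like the sketch below, but it assumes the PHP process can read the secure-file-priv directory (which it typically cannot by default), so I'm not sure this is the right direction:
// Hypothetical sketch: only works if PHP can actually read the secure-file-priv path
return response()->download(
    '/var/lib/mysql-files/performance_data2.csv',
    'performances.csv',
    ['Content-Type' => 'text/csv']
);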
Approach #2:
$headers = array(
'Content-Type' => 'text/csv',
'Cache-Control' => 'must-revalidate, post-check=0, pre-check=0',
'Content-Disposition' => 'attachment; filename=performances.csv',
'Expires' => '0',
'Pragma' => 'public',
);
$response = new StreamedResponse(function () use ($filename) {
    // Open output stream
    $handle = fopen($filename, 'w');

    // Add CSV headers
    fputcsv($handle, [
        "id", "ref", "DataSet", "PubID", "TrialID", "TrtID", "SubjectID", "Site_Sample", "Day_Sample",
        "Time_Sample", "VarName", "VarValue", "VarUnits", "N", "SEM", "SED", "VarType"
    ]);

    PerformanceData::all()
        ->chunk(1500, function ($datas) use ($handle) {
            foreach ($datas as $data) {
                // Add a new row with data
                fputcsv($handle, [
                    // put data in file
                ]);
            }
        });

    // Close the output stream
    fclose($handle);
}, 200, $headers);
This approach timed out. I used Eloquent's ::all() in this case; this would be the largest data set for this particular table.
Approach #3 was just different variations of approach #2, all with the same result: timing out.
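One such variation, for example, wrote to php://output and chunked on the query builder instead of the collection (rough sketch, column list abbreviated):
$response = new StreamedResponse(function () {
    $handle = fopen('php://output', 'w');

    fputcsv($handle, ['id', 'ref', 'DataSet', /* ...remaining columns... */ 'VarType']);

    // Chunk on the query builder so the whole table is never loaded at once
    PerformanceData::chunk(1500, function ($rows) use ($handle) {
        foreach ($rows as $row) {
            fputcsv($handle, [$row->id, $row->ref, $row->DataSet, /* ...remaining columns... */ $row->VarType]);
        }
    });

    fclose($handle);
}, 200, $headers);

return $response;
Even with the streaming output, this still hit the execution time limit on the full table.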
I need the user to be able to create a CSV and download it after it's ready, or to create and download it directly.
Open to any suggestions!
I use GuzzleHttp to send data via "_bulk" to an Elasticsearch index. It is only a small dataset of 850 records. When I transfer the data record by record, I get an error message for 17 records. That's fine for me, since I can then fix the errors.
But when I use _bulk, I do not get any error message at all. The 17 incorrect records are just ignored and are missing from the index. How can I get an error message here? Is there some kind of option I can use? Any ideas?
The endpoint is the _bulk URL shown in the code below. Here are my main code parts:
use GuzzleHttp\Client;

$jsonData = "xxxxx"; // the payload for the request
$elasticUrl = "https://xxxx.xx/xxxxx/_doc/_bulk";

$client = new Client([
    "verify" => false,       // disable SSL certificate verification
    "timeout" => 600,        // maximum timeout for requests
    "http_errors" => false   // disable exceptions
]);

$header = ["Content-Type" => "application/json"];

$result = $client->post($elasticUrl, [
    "headers" => $header,
    "body"    => $jsonData
]);

if ($result->getStatusCode() != 200) {
    $ret = "Error ".$result->getStatusCode()." with message: ".$result->getReasonPhrase();
}
A bulk request will return HTTP 200 even if some of the individual items failed.
However, the bulk response body indicates whether each item succeeded. If you see errors: true in the response, you know that some of the items could not be indexed, and by looking into the items array you'll find the error for each corresponding item.
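For reference, an abridged bulk response with one failed item looks roughly like this (values are illustrative; the action key, create here, mirrors the action used in the bulk payload):
{
  "took": 30,
  "errors": true,
  "items": [
    { "create": { "_index": "my-index", "_id": "1", "status": 201 } },
    { "create": { "_index": "my-index", "_id": "2", "status": 400,
        "error": { "type": "mapper_parsing_exception", "reason": "failed to parse field [price]" } } }
  ]
}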
As @Val pointed out, using $result->getBody() gives the needed information:
$body = (string) $result->getBody();
$bodyArray = json_decode($body, true);

if ($bodyArray["errors"]) {
    $retArray = [];
    foreach ($bodyArray["items"] as $key => $item) {
        if (isset($item["create"]["error"])) {
            $retArray[] = $item["create"]["error"]["reason"].": ".json_encode($data[$key]);
        }
    }
    $ret = implode(", ", $retArray);
}
As a side note: $data holds the data as a PHP array before it is sent to Elasticsearch, which is where the $data[$key] above comes from.
I have been attempting to upload an image to Cloudinary, which is pretty easy. My problem is: how do I go about saving the URL to the database instead of the image? I was supposed to use https://github.com/Silvanite/nova-field-cloudinary, but the documentation is pretty slim. Also, I would like to save the image with its original file name (storeOriginalName).
The nova-field-cloudinary version:
CloudinaryImage::make('Featured Image')
Nova's version:
Image::make('Featured Image')
    ->disk('cloudinary')
https://nova.laravel.com/docs/2.0/resources/file-fields.html#images
https://cloudinary.com/documentation/php_image_and_video_upload#upload_response
https://laravel.com/docs/5.7/requests#storing-uploaded-files
Update: this works for storing the original file name, but I'm still not sure how to grab the URL and save it to the featured_image column:
CloudinaryImage::make('Featured Image')
->storeAs(function (Request $request) {
return $request->featured_image->getClientOriginalName();
}),
You shouldn't need to store the remote URL with Cloudinary. The public id returned by the component is used to generate the final URL when you output the image, using one of the approaches described in the documentation:
// Using the helper (with transformation)
return cloudinary_image($this->featured_image, [
    "width" => 200,
    "height" => 200,
    "crop" => "fill",
    "gravity" => "auto",
]);

// Using the Storage Facade (without transformation)
return Storage::disk('cloudinary')->url($this->featured_image);

// Using the Storage Facade (with transformation)
return Storage::disk('cloudinary')->url([
    'public_id' => $this->featured_image,
    'options' => [
        "width" => 200,
        "height" => 200,
        "crop" => "fill",
        "gravity" => "auto",
    ],
]);
Or you could generate the URL yourself as per the Cloudinary documentation https://cloudinary.com/documentation/image_optimization
It would be helpful if you could expand on why you need to save the full URL as there may be an alternative solution.
The upload response contains a url field. Here is an example:
{
public_id: 'sample',
version: '1312461204',
width: 864,
height: 564,
format: 'jpg',
created_at: '2017-08-10T09:55:32Z',
resource_type: 'image',
tags: [],
bytes: 9597,
type: 'upload',
etag: 'd1ac0ee70a9a36b14887aca7f7211737',
url: '<url>',
secure_url: '<secure_url>',
signature: 'abcdefgc024acceb1c1baa8dca46717137fa5ae0c3',
original_filename: 'sample'
}
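So if you really do want to persist the full URL yourself, a minimal sketch (assuming the cloudinary_php SDK is configured, and a hypothetical featured_image_url column) could upload directly and store secure_url alongside the public id:
// Sketch only: $post and the featured_image_url column are hypothetical
$upload = \Cloudinary\Uploader::upload(
    $request->file('featured_image')->getRealPath(),
    ['use_filename' => true] // base the public id on the original file name
);

$post->featured_image     = $upload['public_id'];   // what the Nova field would normally store
$post->featured_image_url = $upload['secure_url'];  // the full URL from the upload response
$post->save();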
I'm trying to make a classified text, and I'm having trouble turning
(class1 (subclass1) (subclass2 item1 item2))
To
(class1 (subclass1 item1) (subclass2 item1 item2))
I have no idea how to turn the text above into the one below without caching subclass1 in memory. I'm using Perl on Linux, so any solution using a shell script or Perl is welcome.
Edit: I've tried using grep, saving the whole of subclass1 in a variable, then modifying it and exporting it to the list; but the list may get larger, and that way will use a lot of memory.
I have no idea how to turn the text above into the one below
The general approach:
Parse the text.
You appear to have lists of space-separated lists and atoms. If so, the result could look like the following:
{
    type  => 'list',
    value => [
        {
            type  => 'atom',
            value => 'class1',
        },
        {
            type  => 'list',
            value => [
                {
                    type  => 'atom',
                    value => 'subclass1',
                },
            ],
        },
        {
            type  => 'list',
            value => [
                {
                    type  => 'atom',
                    value => 'subclass2',
                },
                {
                    type  => 'atom',
                    value => 'item1',
                },
                {
                    type  => 'atom',
                    value => 'item2',
                },
            ],
        },
    ],
}
It's possible that something far simpler could be generated, but you were light on details about the format.
Extract the necessary information from the tree.
You were light on details about the data format, but it could be as simple as the following if the above data structure was created by the parser:
my $item = $tree->{value}[2]{value}[1]{value};  # 'item1'
Perform the required modifications.
You were light on details about the data format, but it could be as simple as the following if the above data structure was created by the parser:
my $new_atom = { type => 'atom', value => $item };
push @{ $tree->{value}[1]{value} }, $new_atom;
Serialize the data structure.
For the above data structure, you could use the following:
sub serialize {
    my ($node) = @_;
    return $node->{type} eq 'list'
        ? "(".join(" ", map { serialize($_) } @{ $node->{value} }).")"
        : $node->{value};
}
Other approaches could be available depending on the specifics.
I am using Learning Locker (a Learning Record Store).
I succeeded in inserting statements into it via the REST API.
But I did not succeed in fetching statements from it.
How do I perform a query on statements? Via the REST API?
I used the TinCanPHP library. This is how you establish a connection with Learning Locker and query it in PHP. For example:
$lrs = new TinCan\RemoteLRS(
'endpoint/public/data/xAPI/',
'1.0.1',
'username',
'key'
);
$actor = new TinCan\Agent(
[ 'mbox' => 'mailto:dikla@gmail.com' ]
);
$verb = new TinCan\Verb(
[ 'id' => 'http://adlnet.gov/expapi/verbs/progressed' ]
);
$activity = new TinCan\Activity(
[ 'id' => 'http://game.t-shirt' ]
);
$statement = new TinCan\Statement(
[
'actor' => $actor,
'verb' => $verb,
'object' => $activity,
]
);
// Get all activity for an actor by his unique id
function getAllActorActivity($actorUri) {
    global $lrs;
    $actor = new TinCan\Agent(
        [ 'mbox' => $actorUri ] // $actorUri should look like this: 'mailto:dikla@gmail.com'
    );
    $answer = $lrs->queryStatements(['agent' => $actor]);
    return $answer;
}
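A minimal usage sketch (assuming TinCanPHP's LRSResponse/StatementsResult API) would be:
$response = getAllActorActivity('mailto:dikla@gmail.com');

if ($response->success) {
    // $response->content is a TinCan\StatementsResult
    foreach ($response->content->getStatements() as $statement) {
        echo $statement->getVerb()->getId(), "\n";
    }
}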
If it's via JavaScript, you can use the ADL xAPI Wrapper. It simplifies communication with an LRS: https://github.com/adlnet/xAPIWrapper#get-statements
In general you do a GET request on the /statements endpoint. Try just that first and see if you get a JSON response with a "statements" and a "more" property. Then, if that works, you can narrow down results with filters. See the spec for the details and options: https://github.com/adlnet/xAPI-Spec/blob/master/xAPI.md#stmtapiget
Try this curl command; it should return a statement result, albeit from the ADL LRS (note the required xAPI version header):
curl --user tom:1234 -X GET https://lrs.adlnet.gov/xapi/statements -H "X-Experience-API-Version: 1.0.3"
When I was developing with Laravel 4 Beta 3, I used to get JSON POST data from a service using the Input::json() function. But when I updated to Laravel 4 Beta 4, I started getting the following error:
Notice: Undefined property: Symfony\Component\HttpFoundation\ParameterBag::$productName in /Applications/MAMP/htdocs/commonDBAPI/app/controllers/UserController.php line 47
Does anyone have any idea what the reason could be?
Thanks,
You can access just the JSON using Input::json()->all().
JSON input is also merged into Input::all() (and Input::get('key', 'default')), so you can use the same interface to get query string data, form data, and a JSON payload.
The documentation does not yet reflect all of these changes because Laravel 4 is still in beta and the focus is on getting the code right; the documentation will be updated in time for the public release.
How is JSON merged with Input::all()?
Consider the following JSON:
{
    "name": "Phill Sparks",
    "location": "England",
    "skills": [
        "PHP",
        "MySQL",
        "Laravel"
    ],
    "jobs": [
        {
            "org": "Laravel",
            "role": "Quality Team",
            "since": 2012
        }
    ]
}
When merged into Laravel's input the JSON is decoded, and the top-level keys become top-level keys in the input. For example:
Input::get('name'); // string
Input::get('skills'); // array
Input::get('jobs.0'); // object
Input::all(); // Full structure of JSON, plus other input
Yup, they changed it to return a ParameterBag object; switch your code to Input::json()->all().
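For example (a quick sketch using the productName key from the error message):
// Laravel 4 Beta 4: Input::json() now returns a Symfony ParameterBag
$productName = Input::json()->get('productName');

// or grab the whole payload as an array
$data = Input::json()->all();
$productName = $data['productName'];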
For:
{
"name":"Olivier",
"title":"Just a try"
}
Try this:
$input = Input::json()->all();
return $input['name'];