Laravel download PDF file stored in separate Laravel app

I have two Laravel (5.8) apps. One is a user application, the other is more of an API.
The API has PDFs stored in its storage directory, and I need to allow a user to download those PDFs from the other application.
I really have no clue how to send the file over from one app to the other.
The user app makes an API call to the API with the relevant file IDs and so on (that part is fine); I just can't work out how to send the file back and then download it on the other end.
I've tried things like return response()->stream($pdf) on the API and return response()->download($responseFromApiCall), and plenty of other things, but I'm really getting nowhere fast.
Any advice welcome.

The Laravel code you posted is basically correct; you can use one of the stream(), download(), or file() helpers to serve a file.
file()
Serves a file using a binary response; the most straightforward solution.
// optional headers
$headers = [];
return response()->file(storage_path('myfile.zip'), $headers);
You can also pass your own absolute file path instead of using the storage_path() helper.
download()
The download() helper is similar to file(), but it also sets a Content-Disposition header, which results in a download prompt when you retrieve the file in a browser.
$location = storage_path('myfiles/invoice_document.pdf');
// Optional: serve the file under a different filename:
$filename = 'mycustomfilename.pdf';
// optional headers
$headers = [];
return response()->download($location, $filename, $headers);
stream()
The stream() helper is a bit more complex and reads and serves a file in chunks. This can be used in situations where the file size is unknown (such as passing through another incoming stream from an API). It results in a Transfer-Encoding: chunked header, which indicates that the data stream is divided into a series of non-overlapping chunks:
// optional status code
$status = 200;
// optional headers
$headers = [];
// The stream helper requires a callback:
return response()->stream(function () {
    // Load a file from storage.
    $stream = Storage::readStream('somebigfile.zip');
    fpassthru($stream);
    if (is_resource($stream)) {
        fclose($stream);
    }
}, $status, $headers);
Note: Storage::readStream reads from the default storage disk as defined in config/filesystems.php. You can switch disks using Storage::disk('aws')->readStream(...).
Retrieving your served file
Say your file is served under GET example.com/file; another PHP application can then retrieve it with cURL. A popular wrapper for this is Guzzle:
$client = new \GuzzleHttp\Client();
$file_path = __DIR__ . '/received_file.pdf';
$response = $client->get('http://example.com/file', ['sink' => $file_path]);
By the way, you can derive the filename and extension from the response itself; see the sketch below.
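For example, here is a rough sketch of deriving the filename from the response's Content-Disposition header, assuming the serving app sets one (the download() helper above does):
$client = new \GuzzleHttp\Client();
$response = $client->get('http://example.com/file');

// Fallback name in case the header is missing.
$filename = 'received_file.pdf';

// A Content-Disposition header typically looks like: attachment; filename=invoice.pdf
$disposition = $response->getHeaderLine('Content-Disposition');
if (preg_match('/filename="?([^";]+)"?/', $disposition, $matches)) {
    $filename = basename($matches[1]);
}

file_put_contents(__DIR__ . '/' . $filename, $response->getBody());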
If your frontend is JavaScript, you can retrieve the file there as well, but that is another component which I don't have a simple example for.

So if I understand correctly, you have two systems: the user app and the API. You want to serve a PDF from the user app to the actual user. The user app makes a call to the API, which works fine. Your issue is converting the response of the API into a response you can serve from your user app, correct?
In that case I'd say you want to proxy the response from the API via the user app to the user. I don't know what the connection between the user app and the API is, but if you use Guzzle you could look at this answer I posted before; a minimal sketch of the idea follows.
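A minimal sketch of that proxying approach in the user app, assuming Guzzle and a hypothetical files/{id} endpoint on the API (the names and URLs here are placeholders):
use GuzzleHttp\Client;

public function download($id)
{
    $client = new Client(['base_uri' => 'http://api.example.test']);

    // Ask the API for the file, keeping the body as a stream.
    $apiResponse = $client->get("files/{$id}", ['stream' => true]);
    $body = $apiResponse->getBody();

    // Re-serve the stream to the end user as a download.
    return response()->streamDownload(function () use ($body) {
        while (!$body->eof()) {
            echo $body->read(8192);
        }
    }, "file-{$id}.pdf", [
        'Content-Type' => $apiResponse->getHeaderLine('Content-Type'),
    ]);
}
This way the PDF never has to touch the user app's disk; it is passed through to the browser as it arrives.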

Here are the steps you could follow to get the PDF:
1. Make an API call using AJAX (you are probably already doing this) from your public site (the user site) to the API server, with the file ID.
2. The API server fetches the PDF, copies it to the public/users/ directory, and generates the URL of that PDF file (a sketch of this step follows the example below).
3. Send that URL back to the user site as the response. Using JS, add a button/link to the PDF file in the DOM.
Example:
jQuery.post(ajaxurl, data, function (response) {
    var responseData = JSON.parse(response);
    var result = responseData.result;
    var pdf_path = responseData.pdf_path;
    $(".nametag-loader").remove();
    $(".pdf-download-wrapper").append("<a class='download-link' href='" + pdf_path + "' target='_blank'>Download</a>");
});
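For completeness, a rough sketch of what step 2 could look like on the API side; the storage path and URL layout here are assumptions:
public function preparePdf($id)
{
    // Hypothetical location of the stored PDF.
    $sourcePath = storage_path("app/pdfs/{$id}.pdf");

    // Copy it somewhere web-accessible.
    $publicPath = public_path("users/{$id}.pdf");
    copy($sourcePath, $publicPath);

    return response()->json([
        'result'   => 'ok',
        'pdf_path' => url("users/{$id}.pdf"),
    ]);
}
Keep in mind that anything under public/ is reachable by anyone who knows the URL, so this approach trades access control for simplicity.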

Related

Using a lambda API as a reverse proxy for content stored in S3?

I would like to put a reverse proxy on top of S3 so I can make content-serving decisions based on the incoming request.
The S3 bucket is set to website mode and hosted publicly.
I'll obviously have more logic to determine where I am getting the files from, but hopefully this illustrates my intent.
This is using JavaScript; I'm happy to use Go as well.
The following code does not work, but I'm not sure how best to get it working. Can I just send an arrayBuffer through?
module.exports.handler = async (event, context) => {
    const data = await fetch(S3WebsiteURL + event.path);
    const buffer = await data.arrayBuffer();
    return {
        headers: data.headers,
        body: buffer,
        statusCode: 200
    };
};
I would use https://www.npmjs.com/package/request-promise
var rp = require('request-promise');

const htmlString = await rp(S3WebsiteURL + event.path);
return {
    body: htmlString,
    statusCode: 200
};
Try without headers and if it works, add header support.
I've found it difficult to use Lambda to proxy binary data: with API Gateway, at least, it expects binary data in base64 format at various points depending on how you set things up. They've improved things since I last tried it that way, so hopefully someone else can answer based on more recent experience.
If your content-serving decisions are limited to access control (you don't need to transform the data you're serving), then you can use your Lambda as a URL provider instead of a content provider. Switch public sharing of the S3 bucket off, access the items using the S3 API, and have your Lambda call S3.getSignedUrl() to get a link to the actual content. That way, only callers of the Lambda have a valid URL to the content you want to protect, and depending on your application you can set the timeout on the pre-signed URL short enough that you don't have to worry about it being shared.
The advantage here is that since the content itself doesn't get proxied through the Lambda, your Lambda runtime and memory costs can be lower and performance should be better.

Unpredictable request payload structure but Need to create user from it

My system can receive requests from multiple sites, and the request payload will contain user details. However, the structure or format of the payload can differ and is not known in advance. My system must extract the details from it and create a user in my system.
For example, one request source might send the first name as fname='abc' and another might send it as first_name='abc'.
I have to implement this in PHP (Laravel).
This would resolve your issue:
$name = null;
if (!empty($data['fname'])) {
    $name = $data['fname'];
} elseif (!empty($data['first_name'])) {
    $name = $data['first_name'];
}
Then you can run validation, autofill, and similar steps to create or modify your user model.
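If the number of possible field names grows, a small alias map keeps this manageable. A sketch, assuming $data holds the decoded payload (the alias lists here are made up):
// Map each canonical field to the aliases that different sources might send.
$aliases = [
    'first_name' => ['first_name', 'fname', 'firstName'],
    'last_name'  => ['last_name', 'lname', 'lastName'],
    'email'      => ['email', 'email_address', 'mail'],
];

$normalized = [];
foreach ($aliases as $field => $candidates) {
    foreach ($candidates as $candidate) {
        if (!empty($data[$candidate])) {
            $normalized[$field] = $data[$candidate];
            break;
        }
    }
}

// $normalized now has consistent keys regardless of the source's format,
// ready for validation and User::create().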

Laravel + Image Intervention : Force download of file that is not saved

I want to simply upload files, resize them, and then force a download for every uploaded file. I do not want to save the files.
Resizing etc. works fine; however, I cannot manage to force the download of the new file.
$content = $image->stream('jpg');
return response()->download($content, $name);
The snippet shown results in
is_file() expects parameter 1 to be a valid path, string given
Most probably because $content is not a path but the actual data.
Is there a way to enforce the download without saving it first?
Try this:
$url = "https://cdn.psychologytoday.com/sites/default/files/blogs/38/2008/12/2598-75772.jpg";
$name = 'happy.jpg';
$image = Image::make($url)->encode('jpg');
$headers = [
'Content-Type' => 'image/jpeg',
'Content-Disposition' => 'attachment; filename='. $name,
];
return response()->stream(function() use ($image) {
echo $image;
}, 200, $headers);
Here's what we're doing. We use Intervention Image to encode the image into the correct file-type format. Then we set the browser headers ourselves to tell the browser what type of file it is and that it should be treated as a downloadable attachment. Lastly, we return the response using Laravel's stream() method with a closure: we give the closure access to the encoded image data with use ($image) and simply echo that raw data into the response. From there we tell the browser the HTTP code is 200 (OK) and hand it our headers so it knows what to do.
Your problem was that Laravel's handy download() method expects a file on the file system, so we have to stream the data and set the headers manually ourselves.
You will need some additional code to handle different file extensions, both for Intervention's encode() method and for the returned Content-Type; a sketch follows. A list of all registered MIME types is maintained by IANA.
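A minimal sketch of such a mapping, reusing $name and $url from the example above (extend the list as needed):
// Map file extensions to the matching Content-Type header.
$mimeTypes = [
    'jpg'  => 'image/jpeg',
    'jpeg' => 'image/jpeg',
    'png'  => 'image/png',
    'gif'  => 'image/gif',
    'webp' => 'image/webp',
];

$extension = strtolower(pathinfo($name, PATHINFO_EXTENSION));
$contentType = $mimeTypes[$extension] ?? 'application/octet-stream';

$image = Image::make($url)->encode($extension);
$headers = [
    'Content-Type'        => $contentType,
    'Content-Disposition' => 'attachment; filename=' . $name,
];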
I hope this helps.
Edit: Later versions of Laravel introduced the deleteFileAfterSend() method on the response object. If you are okay with temporarily saving the file to the server you can do this:
return response()->download($path)->deleteFileAfterSend();
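Combined with the Intervention example above, that could look like this (a sketch; the temporary path is arbitrary and its directory must already exist):
$path = storage_path('app/tmp/' . $name);
Image::make($url)->encode('jpg')->save($path);

// Laravel deletes the temporary file once it has been sent.
return response()->download($path, $name)->deleteFileAfterSend(true);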

Meteor: Uploading an image and saving to the database as base64 string

I have a Meteor app and I am interested in getting image upload to work in the simplest possible manner.
The simplest manner I can come up with is to somehow convert the image to a base64 string on the client and then save it to the database as a string.
How is it possible to convert an image on the user's filesystem to a base64 string and then save it to the database?
You can use an HTML5 file input:
HTML
<template name="fileUpload">
    <form>
        <input type="file" name="imageFile">
        <button type="submit" disabled={{submitDisabled}}>
            Submit
        </button>
    </form>
</template>
Then listen to the change event and use a FileReader to read the local file as a base64 data URL, which we store in a reactive var:
Template.fileUpload.created = function () {
    this.dataUrl = new ReactiveVar();
};

Template.fileUpload.events({
    "change input[type='file']": function (event, template) {
        var files = event.target.files;
        if (files.length === 0) {
            return;
        }
        var file = files[0];
        var fileReader = new FileReader();
        fileReader.onload = function (event) {
            var dataUrl = event.target.result;
            template.dataUrl.set(dataUrl);
        };
        fileReader.readAsDataURL(file);
    }
});
Then we can use the reactive var's value to allow or disallow form submission and to send the value to the server:
Template.fileUpload.helpers({
    submitDisabled: function () {
        // Disable submission until a file has been read into the reactive var.
        return !Template.instance().dataUrl.get();
    }
});

Template.fileUpload.events({
    "submit": function (event, template) {
        event.preventDefault();
        Meteor.call("uploadImage", template.dataUrl.get());
    }
});
You will need to define a server method that saves the dataUrl to some collection field value. What's cool about data URLs is that you can use them directly as an image tag's src.
Note that this solution scales poorly: the image data won't be cachable and will pollute the app's regular database communications (which should only contain text-like values).
You could fetch the base64 data from the dataUrl and upload it to Google Cloud Storage or Amazon S3 and serve the files behind a CDN.
You could also use services that do all of this stuff for you like uploadcare or filepicker.
EDIT:
This solution is easy to implement, but it comes with a major drawback: fetching large base64 strings from MongoDB slows down your app's other data fetching, and since DDP communications are always live and not cachable at the moment, your app will keep redownloading the image data from the server.
You wouldn't save data URLs to Amazon; you would save the image itself, and your app would fetch it via an Amazon URL with a cachable HTTP request.
You have two choices when it comes to file upload: you can upload files directly from the client using the browser's JavaScript APIs, or you can upload them from the server using Node.js (NPM module) APIs.
If you upload from the server (which is usually simpler, because only your server needs to act as a trusted client of the Amazon API and your users don't have to authenticate against third-party services), you can send the data a user wants to upload through a method call with the dataUrl as an argument.
If you don't want to dive into all of this, consider using uploadcare or filepicker, but keep in mind that these are paid services (as is Amazon S3, by the way).
Not sure if this is the best way, but you can do this easily with a FileReader. In the template event handler where you receive the file, you can pass it to the reader and get back a base64 string. For example, something like this:
Template.example.events({
    'submit form': function (event, template) {
        var reader = new FileReader();
        var file = template.find('#fileUpload').files[0]; // get the file
        reader.onload = function (event) {
            // event.target.result is the base64 string
        };
        reader.readAsDataURL(file);
    }
});

How to use Qooxdoo's Stores to request an ajax response with a POST?

In the tutorial they only show this:
var url = "http://api.twitter.com/1/statuses/public_timeline.json";
this.__store = new qx.data.store.Jsonp(url, null, "callback");
But I need to communicate with my own server, so I've made some changes (the URL, and Jsonp to Json):
var url = "http://127.0.0.1:8000/";
this.__store = new qx.data.store.Json(url);
But I also need to be able to send some information to the server when the store makes the request, like:
{serviceToUseOnServer: 'articles', argument1: 'xxx'}
This is like a POST request, but I don't really know how to send that data to the server using qooxdoo's store models. Please don't tell me to use GET and encode all that data in a URL.
You have the possibility to configure the request object used to send the data. Just add a delegate to the store and implement the configureRequest method [1].
[1] http://demo.qooxdoo.org/current/apiviewer/#qx.data.store.IStoreDelegate~configureRequest
