Where can I access serverId? - filepond

I'd like to retrieve the serverId of an uploaded file. I tried to retrieve it in onupdatefiles, since that callback has a fileItems parameter. I assumed that I could use fileItems[0].serverId to fetch the uploaded file's serverId, but it was null.
Where am I going wrong?

Sadly, the event FilePond:updatefiles fires before files are (successfully) uploaded, so at that point the file object's serverId is still null.
I had a similar issue and worked around it with the following code:
document.addEventListener('FilePond:processfile', e => {
  global.newFileIds = global.filePond.getFiles().map(x => x.serverId)
})
document.addEventListener('FilePond:removefile', e => {
  global.newFileIds = global.filePond.getFiles().map(x => x.serverId)
})
FYI:
FilePond:updatefiles is called when new files are added to the upload queue. In my case this fired too early (and twice).
FilePond:processfile is called after a file has been successfully uploaded.
FilePond:removefile is called when a file is removed, but that file object never has a serverId (by design!?).
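If you only need the id of the file that just finished uploading, you can also read it straight from the processfile event itself. A minimal sketch, assuming the event detail carries (error, file) the same way FilePond's onprocessfile callback does:

document.addEventListener('FilePond:processfile', e => {
  // e.detail.file is the processed file item; serverId is set once the upload succeeded
  if (!e.detail.error) {
    console.log('Uploaded file serverId:', e.detail.file.serverId)
  }
})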

Related

TYPO3 Extbase - Download file is cut off to 40 KB if user is not logged in to the front end

I created a duplicate of a colleague's download extension, which is basically an extension that just provides files to download in the back end.
Problem:
If I try to download a file while logged in to the back end (the extension is only accessible after login), it works perfectly fine.
However, if I open a private browser window where I am not logged in to the back end, the file always gets cut off and only the first 40 KB are downloaded, even though the file is normally 10 MB. Why is the file cut off?
I can download small files (< 40 KB) perfectly, without them getting cut off.
NOTE:
Before I edited the extension, the download worked perfectly even when not logged in to the back end, and the download was triggered the same way.
I am currently comparing the code, but the logic looks fine, since I did not change much (added a new model, renamed the extension, and some other things).
Does anyone have a clue what could lead to this problem?
This is the relevant part of my download controller, where I first get the public URL of the file by passing the file's fid and then trigger the download by sending headers.
...
if ($this->request->hasArgument('fid')) {
    $this->fid = $this->request->getArgument('fid');
}
if ($this->request->hasArgument('cid')) {
    $this->cid = $this->request->getArgument('cid');
}
$fileobj = $this->fileRepository->findByUid($this->fid);
if ($fileobj->getFile() !== null) {
    $downloadFilePath = $fileobj->getFile()->getOriginalResource()->getPublicUrl();
    if (file_exists($downloadFilePath)) {
        $fileCounter = (int)$fileobj->getCounter();
        $fileobj->setCounter(++$fileCounter);
        $oldChecksum = $fileobj->getChecksume();
        $groesse = filesize($downloadFilePath);
        if (isset($oldChecksum)) {
            $checksum = sha1_file($downloadFilePath);
            $fileobj->setChecksume($checksum);
        }
        // Update the file object and persist before the download is sent
        $this->fileRepository->update($fileobj);
        $this->persistenceManager->persistAll();
        // If the file exists, force the download
        $fileName = basename($downloadFilePath);
        $this->response->setHeader('Content-Type', "application/force-download", TRUE);
        $this->response->setHeader('Content-Disposition', 'attachment; filename=' . $fileName, TRUE);
        $this->response->setHeader('Content-Length', $groesse, TRUE);
        #readfile($downloadFilePath);
        $this->response->sendHeaders();
        return true; // never reached, could be removed
    } else {
        // Send emails to everyone entered in the address list in the extension configuration
        $this->sendEmails('missing_file', $fileobj);
        $this->redirect(
            'list',
            'Category',
            NULL,
            array(
                'missing' => array(
                    'fileId' => $this->fid,
                    'category' => $this->cid
                )
            )
        );
    }
}
The 40 KB file does not contain anything that shouldn't be there; it is just cut off. I tested it by writing a lot of numbers into a file line by line and downloading it. The result: only a couple of thousand numbers are in the file instead of all of them.
I tried it both with files stored on an FTP server and with files stored in user_upload; same result.
Here you can see the 40 KB file:
http://pasteall.org/459911
Snippet (in case the link goes down):
<ul>
<li>0</li>
<li>1</li>
<li>2</li>
<li>3</li>
<li>4</li>
<li>5</li>
<li>6</li>
<li>7</li>
<li>8</li>
<li>9</li>
//Cut because stackoverflow does not allow me to post such big texts
...
<li>3183</li>
<li>3184</li>
<li>3185</li>
<li>3186</li>
<li
You can see that it stops downloading the rest, the question is: why?
UPDATE:
I changed it to this:
// If the file exists, force the download
$fileName = basename($downloadFilePath);
$this->response->setHeader('Content-Type', "application/force-download", TRUE);
$this->response->setHeader('Content-Disposition', 'attachment; filename=' . $fileName, TRUE);
$this->response->setHeader('Content-Length', $groesse, TRUE);
ob_start();
ob_flush();
flush();
$content = file_get_contents($downloadFilePath);
$this->response->setContent($content);
$this->response->sendHeaders();
return true; // never reached, could be removed
Now the file is downloaded completely, but it is wrapped inside the HTML from the template; it gets rendered inside the Fluid variable mainContent.
Like this:
...
<!--TYPO3SEARCH_begin-->
<div class="clearfix col-sm-{f:if(condition:'{data.backend_layout} == 4',then:'12',else:'9')} col-md-{f:if(condition:'{data.backend_layout} == 4',then:'9',else:'6')} col-lg-{f:if(condition:'{data.backend_layout} == 4',then:'10',else:'8')} mainContent">
<f:format.raw>{mainContent}</f:format.raw>
</div>
<!--TYPO3SEARCH_end-->
...
It gets weirder and weirder...
I finally solved the problem. I just had to call exit (or die) after sending the headers:
#readfile($downloadFilePath);
$this->response->sendHeaders();
exit;
NOTE: If you end your script with exit or die, a TYPO3 session value set with e.g. $GLOBALS['TSFE']->fe_user->setKey("ses", "token", DownloadUtility::getToken(32)); will no longer be persisted when not logged in to the back end! In that case, use $GLOBALS['TSFE']->fe_user->setAndSaveSessionData("token", DownloadUtility::getToken(32)); if no login should be required.
Now it works even if not logged in to the front end.
But I still don't know why the download worked without being cut off while logged in to the back end, even though the exit statement was missing. That's extremely weird, and we have no explanation.
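For reference, here is a minimal sketch of the sequence that ends up working, combining the headers from above with an explicit body and the exit. Treat it as an illustration, not a drop-in; I'm assuming the body is streamed with readfile() rather than via setContent():

$fileName = basename($downloadFilePath);
$this->response->setHeader('Content-Type', "application/force-download", TRUE);
$this->response->setHeader('Content-Disposition', 'attachment; filename=' . $fileName, TRUE);
$this->response->setHeader('Content-Length', $groesse, TRUE);
$this->response->sendHeaders();
// Stream the file body ourselves ...
readfile($downloadFilePath);
// ... and stop TYPO3 here, otherwise the page template is rendered around the file content
exit;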

Can't 'file.open.read' a url within a ruby if-block

I want to create a Ruby script that takes barcodes from a text file, searches a web service for each barcode, and downloads the result.
First I tried to test the web service download. When I hardcode the query in a file:
result_download = open('http://webservice.org/api/?query=barcode:78686112327', 'User-Agent' => 'UserAgent email#gmail.com').read
It all works fine.
When I try to take the barcode from a text file and run the query, I run into problems.
IO.foreach(filename) {|barcode| barcode
  website = "'http://webservice.org/api/?query=barcode:" + barcode.to_str.chomp + "', 'User-Agent' => 'UserAgent email#gmail.com'"
  website = website.to_s
  mb_metadata = open(website).read
}
The result of this is:
/home/user/.rvm/rubies/ruby-2.3.0/lib/ruby/2.3.0/open-uri.rb:37:in `initialize': No such file or directory # rb_sysopen - http://webservice.org/api/?query=barcode:78686112327', 'User-Agent' => 'UserAgent email#gmail.com' (Errno::ENOENT)
I can't figure out whether the problem is that the string I generate isn't a valid URL and Ruby is trying to open a non-existent file, or that I am doing all this in a loop and the file/URL doesn't exist there. I have tried using open(website).write instead of open(website).read, but that produces the same error.
Any help would be much appreciated.
The error message you get explicitly states that there is no such file:
http://webservice.org/api/?query=barcode:78686112327', 'User-Agent' => 'UserAgent email#gmail.com'.
You are trying to pass all the parameters to the open method as one big string (website), which is wrong. You should do it like this:
require 'open-uri'

IO.foreach(filename) do |barcode|
  website = "http://webservice.org/api/?query=barcode:#{barcode.to_str.chomp}"
  mb_metadata = open(website, 'User-Agent' => 'UserAgent email#gmail.com').read
end
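If the barcodes can contain characters that are not URL-safe, it may also be worth percent-encoding them before interpolation. A small sketch using Ruby's standard CGI module (whether your barcodes actually need this depends on the web service):

require 'open-uri'
require 'cgi'

IO.foreach(filename) do |barcode|
  # CGI.escape percent-encodes anything that is not safe inside a query string
  query = CGI.escape(barcode.to_str.chomp)
  mb_metadata = open("http://webservice.org/api/?query=barcode:#{query}", 'User-Agent' => 'UserAgent email#gmail.com').read
end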

Storage::get() using Amazon S3 returns false

Combining both Intervention Image and Amazon S3, I'd like to be able to pull a file from S3 and then use Image to do some cropping. This is what I have so far, why does Storage::get() return false?
$path = 'uploads/pics/123.jpeg';
$exists = Storage::disk('s3')->exists($path); // returns true
$image = Storage::disk('s3')->get($path); // returns false
From the S3 side of things, the bucket permissions are set to 'Everyone', and Storage::getVisibility() returns public... I'm not sure why I can't load the image as if it were a local image.
After digging deeper into the code I found this message:
"Error executing "GetObject" on "file"; AWS HTTP error: file_exists(): open_basedir restriction in effect. File(/etc/pki/tls/certs/ca-bundle.crt) is not within the allowed path(s): (paths)"
At first it seemed that my server didn't have this file, but it does! The file is just located in another folder:
/etc/ssl/certs/ca-certificates.crt
So, to solve my problem on Ubuntu, I had to create the folder /etc/pki/tls/certs and then symlink to the correct file:
cd /etc/pki/tls/certs;
sudo ln -s /etc/ssl/certs/ca-certificates.crt ca-bundle.crt;
Then edit your php.ini and add /etc/pki/tls/certs/ca-bundle.crt to the open_basedir configuration.
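For illustration, the resulting open_basedir line could look something like this (the other paths here are placeholders for whatever your configuration already allows):

; php.ini
open_basedir = /var/www/html:/tmp:/etc/pki/tls/certs/ca-bundle.crt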
Restart your PHP server!
For me this solved the problem; I hope it helps!
As of December 2020, Amazon S3 provides strong read-after-write consistency in all Regions, rendering this answer obsolete. For more details, refer to the Amazon S3 Strong Consistency page.
This shouldn't be an issue anymore.
The old answer below has been kept for reference purposes & for providing a reason for the bounty previously awarded.
From the Amazon S3 documentation:
Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all regions with one caveat. The caveat is that if you make a HEAD or GET request to the key name (to find if the object exists) before creating the object, Amazon S3 provides eventual consistency for read-after-write.
Given the example code, where the path is static and the exists call is made prior to the get, my conjecture is that you're being hit by eventual consistency. Your get should eventually succeed. Try:
$backoff = 0;
while (false === ($image = Storage::disk('s3')->get($path))) {
    if (5 < $backoff) {
        throw new \RuntimeException;
    }
    sleep(pow(2, $backoff++));
}
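For what it's worth, this sleeps 1, 2, 4, 8, 16 and 32 seconds between attempts and raises once the retries are exhausted, so it waits roughly a minute in total before giving up.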
If you are using Laravel 5, then this method applies:
// Assumed imports (not shown in the original): the AWS facade from the aws-sdk-php-laravel
// package and use Aws\S3\Exception\S3Exception;
$photo = $attributes['banner_image'];
$s3 = AWS::createClient('s3');
try {
    $response = $s3->putObject([
        'Bucket' => 'gfpressreleasephotos',
        'Key' => str_random(8) . '.' . str_replace(' ', '-', strtolower($photo->getClientOriginalName())),
        'Body' => fopen($photo->getRealPath(), 'r'),
        'ACL' => 'public-read',
    ]);
    if ($response->get('ObjectURL') != null) {
        $photourl = $response->get('ObjectURL');
    } else {
        $photourl = $response->get('Location');
    }
    $attributes['banner_image'] = $photourl;
} catch (S3Exception $e) {
    return "There was an error uploading the file.\n";
}

Deleting Files in Shared Google drive using Apple script

I have a shared folder, and one of the editors regularly adds files to it. I want to keep flushing the folder with the following code, based on each file's last-change date. It throws an error, and it seems to me that because I am NOT the owner of the files, I am not able to delete them. Is there any way out?
function sevenDayFlush() // renamed: an identifier cannot start with a digit, so "7DayFlush" is a syntax error
{
  // Log the file names and last-change info for the given folder (by id)
  // Enter the ID between the brackets
  var mfolder = DriveApp.getFolderById('<i keep folder id here>');
  // Get the files in the folder
  var lfiles = mfolder.getFiles();
  while (lfiles.hasNext()) {
    var file = lfiles.next();
    // Match files that are older than 7 days
    if (new Date() - file.getLastUpdated() > 7 * 24 * 60 * 60 * 1000) {
      Logger.log(file.getName() + '----' + file.getLastUpdated());
      // This is the line that throws the error
      file.setTrashed(true);
    }
  }
}
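For what it's worth, a defensive variant of the loop could catch the permission error and keep going instead of aborting. A sketch (assuming setTrashed throws when you lack permission, which matches the behaviour described above):

function sevenDayFlushSafe() {
  var mfolder = DriveApp.getFolderById('<folder id here>');
  var lfiles = mfolder.getFiles();
  while (lfiles.hasNext()) {
    var file = lfiles.next();
    if (new Date() - file.getLastUpdated() > 7 * 24 * 60 * 60 * 1000) {
      try {
        file.setTrashed(true);
      } catch (err) {
        // Likely not the owner of this file; log it and continue with the rest
        Logger.log('Could not trash ' + file.getName() + ': ' + err);
      }
    }
  }
}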

How to upload an image to Amazon S3 into a folder in ruby?

I am trying to do it like this:
AWS.config(
  :access_key_id => '...',
  :secret_access_key => '...'
)
s3 = AWS::S3.new
bucket_name = 'bucket_name'
key = "#{File.basename(avatar_big)}"
s3.buckets[bucket_name].objects[key].write(:file => avatar_big_path)
This works well for a single file; the file is uploaded to the root of the configured bucket.
However, how do I upload it into the folder photos that is located in the root?
I've tried
key = "photos/#{File.basename(avatar_big)}"
but this doesn't work.
EDIT: error message
Thank you
I had the same issue as the OP. This is what worked for me:
key = "photos/example.jpg"
bucket = s3.buckets[bucket_name]
filepath = Pathname.new("path/to/example.jpg")
o = bucket.objects[key]
o.write(filepath)
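Note that S3 has no real directories: the photos/ prefix in the object key is all there is, and it is what makes the object show up inside a photos folder in the console.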
Something I would check is the object key you are trying to use. There's not much documentation on what the restrictions are (see this and this), but the key shown in the error message looks suspicious to me.
Try including the path in the file key:
s3.buckets[bucket_name].objects[key].write(:file => "photos/#{avatar_big_path}")
