HowlerJS: Audio skipping back on itself on Safari and iOS devices - laravel

I am developing a web player of sorts, using the PHP framework Laravel to handle the playlist data. I build an array of the playlist with all the necessary information, and from that array I create a Howl instance of the playlist object when it needs to be played.
This works fluidly on Firefox & Chrome, both on desktop and on mobile. However, I'm encountering issues when testing on Safari or iOS browsers.
What happens: the audio plays normally, but around 1-2 minutes into the song it loops back to a point roughly 20-45 seconds earlier. The result is a really annoying track that keeps repeating the same section until it ends, which it eventually does, because despite the looping the app keeps counting up the seconds of the song (sound.seek() keeps ticking up).
Looking at the network tab I've noticed something odd: whereas the other browsers fetch the audio source only once, Safari does so multiple times. This is about the only tangible difference I've noticed.
Since I don't have 10 rep, the image goes here: https://imgur.com/Y48J52g
The oddest part is that a locally hosted version doesn't have this issue at all. So is this a web-server issue? A browser issue? I'm at a loss.
The onloaderror and onplayerror events don't fire either, so no issues there as far as I know.
Instancing the howl:
sound = data.howl = new Howl({
    src: ['./get-audio' + data.file],
    html5: true,
    // After this I set up all onX handlers (onplay, onend, etc.)
    ...
});
sound.play();
Then whenever I need the next song I unload this howl instance and create the next one.
Most of my code is adjusted from the HowlerJS example 'player', in case you'd like to delve deeper into the code itself.
How the audio gets served:
public function getAudio($map, $name)
{
    $fileName = $map.'/'.$name;
    $file = Storage::disk('local')->get($fileName);
    $filesize = Storage::disk('local')->size($fileName);
    $size = $filesize;
    $length = $size;
    $start = 0;
    $end = $size - 1;

    return response($file)
        ->withHeaders([
            'Accept-Ranges' => "bytes",
            'Accept-Encoding' => "gzip, deflate",
            'Pragma' => 'public',
            'Expires' => '0',
            'Cache-Control' => 'must-revalidate',
            'Content-Transfer-Encoding' => 'binary',
            'Content-Disposition' => ' inline; filename='.$name,
            'Content-Length' => $filesize,
            'Content-Type' => "audio/mpeg",
            'Connection' => "Keep-Alive",
            'Content-Range' => 'bytes 0-'.$end .'/'.$size,
            'X-Pad' => 'avoid browser bug',
            'Etag' => $name,
        ]);
}
So I'm not sure why Safari/iOS has an issue with the hosted version while the local one works fine.
This is my first question on this site, so if you'd like some more information let me know.
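(A side note on the getAudio code above: it advertises Accept-Ranges and sends a Content-Range header, but it always returns the full file with a 200 status and never actually honors the browser's Range header. Safari relies on byte-range requests for HTML5 audio, so below is a rough sketch of what honoring them manually could look like; it is not the code from the question, and the range parsing is deliberately simplified to a single bytes=start-end range.)
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;

public function getAudio(Request $request, $map, $name)
{
    $fileName = $map.'/'.$name;
    $file = Storage::disk('local')->get($fileName);
    $size = Storage::disk('local')->size($fileName);

    $start = 0;
    $end = $size - 1;
    $status = 200;

    // Safari asks for byte ranges; answer with 206 Partial Content and only the requested slice.
    if ($request->headers->has('Range')
        && preg_match('/bytes=(\d+)-(\d*)/', $request->header('Range'), $matches)) {
        $start = (int) $matches[1];
        $end = $matches[2] !== '' ? (int) $matches[2] : $size - 1;
        $status = 206;
    }

    $headers = [
        'Accept-Ranges' => 'bytes',
        'Content-Type' => 'audio/mpeg',
        'Content-Length' => $end - $start + 1,
    ];

    // Content-Range only belongs on partial (206) responses.
    if ($status === 206) {
        $headers['Content-Range'] = 'bytes '.$start.'-'.$end.'/'.$size;
    }

    return response(substr($file, $start, $end - $start + 1), $status)->withHeaders($headers);
}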

I found out the issue: Safari thought I was serving an audio stream rather than just an MP3 file, which caused it to continuously send requests. I solved this by serving my audio like this:
$path = storage_path().DIRECTORY_SEPARATOR."app".DIRECTORY_SEPARATOR."songs".DIRECTORY_SEPARATOR.$name;
$response = new BinaryFileResponse($path);
BinaryFileResponse::trustXSendfileTypeHeader();
return $response;
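For completeness, a fleshed-out version of that fix might look like the sketch below. The use statements and the explicit Content-Type/Content-Disposition handling are my additions and not part of the original answer; the useful property of BinaryFileResponse here is that it supports HTTP range requests (206 Partial Content) out of the box, which is what Safari expects for media.
use Symfony\Component\HttpFoundation\BinaryFileResponse;
use Symfony\Component\HttpFoundation\ResponseHeaderBag;

public function getAudio($map, $name)
{
    $path = storage_path('app'.DIRECTORY_SEPARATOR.'songs'.DIRECTORY_SEPARATOR.$name);

    BinaryFileResponse::trustXSendfileTypeHeader();

    $response = new BinaryFileResponse($path);
    $response->headers->set('Content-Type', 'audio/mpeg');
    // Serve inline so Howler/the audio element can stream the file instead of forcing a download.
    $response->setContentDisposition(ResponseHeaderBag::DISPOSITION_INLINE, $name);

    return $response;
}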

Related

Understanding JMeter results on simple Laravel application

I am struggling to make sense of the load test results from JMeter.
I want to understand how much load a plain vanilla Laravel application can handle. I set up simple endpoints on plain Laravel 8 and tried them on different AWS EC2 instances (t3.medium, t3.xlarge, a load-balanced setup, etc.).
The following routes were used, where we artificially increase the response time:
Route::get('/r0', function (Request $request) {
    $data = [
        'name' => 'test_name',
        'image' => 'imageURL',
        'translations' => [
            'en' => 'test_english',
            'te' => 'పంచాయతి సెక్రటరి',
        ],
    ];
    return response()->json($data, 200, ['Content-Type' => 'application/json;charset=UTF-8', 'Charset' => 'utf-8'], JSON_UNESCAPED_UNICODE);
});

Route::get('/r1', function (Request $request) {
    sleep(2);
    $data = [
        'name' => 'test_name',
        'image' => 'imageURL',
        'translations' => [
            'en' => 'test_english',
            'te' => 'పంచాయతి సెక్రటరి',
        ],
    ];
    return response()->json($data, 200, ['Content-Type' => 'application/json;charset=UTF-8', 'Charset' => 'utf-8'], JSON_UNESCAPED_UNICODE);
});

Route::get('/r2', function (Request $request) {
    sleep(10);
    $data = [
        'name' => 'test_name',
        'image' => 'imageURL',
        'translations' => [
            'en' => 'test_english',
            'te' => 'పంచాయతి సెక్రటరి',
        ],
    ];
    return response()->json($data, 200, ['Content-Type' => 'application/json;charset=UTF-8', 'Charset' => 'utf-8'], JSON_UNESCAPED_UNICODE);
});
I recorded the throughput and deviation as obtained from the “Graph Results” and other details as obtained from the “Aggregate Report” in JMeter for the different endpoints.
Here’s a sample result for r1 on one of the setups:
From the above image/table, here are some of the questions I'd like answered:
What is the load that the server can handle for this endpoint? Is it the point where the deviation becomes higher than the throughput (as claimed in some blog posts), or the point where the error % becomes non-zero?
How does one define the maximum load or capacity the server can handle? Can it be an absolute number, like 200 users at any point in time?
For some endpoints, the server stops responding completely after a certain load, and I had to restart it before I could do further testing. Why does that happen?
Your table doesn't tell the full story regarding how many threads were active and what the relationship was between the number of threads, throughput, response time, errors, etc. I would rather suggest generating the HTML Reporting Dashboard; it's far more informative than the aggregated numbers.
Normally I would look for the saturation point, i.e. the point of maximum system performance, like this:
start with 1 virtual user
gradually increase the load, observing e.g. the Transactions per Second chart. On a well-behaved system the throughput (number of transactions per second) should increase by the same factor as the number of virtual users, i.e. if you double the load, the throughput should roughly double (for example, 10 users at 50 TPS should become about 20 users at 100 TPS) while response times stay more or less the same
at some point you will see that throughput decreases and response time increases. If you look at the Active Threads Over Time chart right before that moment, you will see how many virtual users were online at that stage; this is the number you're looking for
you can keep increasing the load to see when errors start occurring or the application terminates
I think point one above provides the answer.
There are many possible reasons; the most common/obvious ones are:
the server lacks essential resources (CPU, RAM, network); this can be checked using the JMeter PerfMon Plugin
the server is not properly configured for high loads; see e.g. the 12 Tips for Laravel Performance Optimization in 2020 guide for example tuning tweaks
check your application and operating system logs; it might be the case that the application was terminated for consuming the aforementioned resources, for example by the OOM Killer

TYPO3 Extbase - Download file is cut off to 40 KB if user is not logged in to the front end

I created a duplicate of a download extension from my colleague, which is basically an extension that just provides files for download in the back end.
Problem:
If I try to download a file while the extension is only accessible after logging in to the back end, it works perfectly fine.
However, if I open a private browser window where I am not logged in to the back end, the file always gets cut off and only the first 40 KB are downloaded... even though it is normally 10 MB. Why is the file cut off?
I can download small files (< 40 KB) perfectly fine without them getting cut off.
NOTE:
Before I edited the extension, the download worked perfectly, even when not logged in to the back end! And the download was triggered the same way.
Currently I am comparing the code, but the logic looks OK, since I did not change much (added a new model, renamed the extension and some other stuff).
Does someone have a clue what could lead to this problem?
This is the relevant part of my download controller, where I first get the public URL of the file by passing its fid, and then trigger the download by sending headers.
...
if ($this->request->hasArgument('fid')) {
    $this->fid = $this->request->getArgument('fid');
}
if ($this->request->hasArgument('cid')) {
    $this->cid = $this->request->getArgument('cid');
}

$fileobj = $this->fileRepository->findByUid($this->fid);
if ($fileobj->getFile() !== null) {
    $downloadFilePath = $fileobj->getFile()->getOriginalResource()->getPublicUrl();
    if (file_exists($downloadFilePath)) {
        $fileCounter = (int)$fileobj->getCounter();
        $fileobj->setCounter(++$fileCounter);
        $oldChecksum = $fileobj->getChecksume();
        $groesse = filesize($downloadFilePath);
        if (isset($oldChecksum)) {
            $checksum = sha1_file($downloadFilePath);
            $fileobj->setChecksume($checksum);
        }
        // Update fileobj
        $this->fileRepository->update($fileobj);
        // Unset fileobj before persisting, otherwise there will also be other changes
        $this->persistenceManager->persistAll();
        // If the file exists, force the download
        $fileName = basename($downloadFilePath);
        $this->response->setHeader('Content-Type', "application/force-download", TRUE);
        $this->response->setHeader('Content-Disposition', 'attachment; filename=' . $fileName, TRUE);
        $this->response->setHeader('Content-Length', $groesse, TRUE);
        #readfile($downloadFilePath);
        $this->response->sendHeaders();
        return true; // I could also delete this line, since it is never reached.
    } else {
        // Send emails to everyone listed in the address list in the extension configuration.
        $this->sendEmails('missing_file', $fileobj);
        $this->redirect(
            'list',
            'Category',
            NULL,
            array(
                'missing' => array(
                    'fileId' => $this->fid,
                    'category' => $this->cid
                )
            )
        );
    }
}
The 40 KB file does not contain anything that shouldn't be there; it is just cut off. I tested this by writing a lot of numbers into a file line by line and downloading it. The result: only a couple of thousand numbers end up in the file instead of all of them.
I tried it with both files stored on an FTP server and files stored in user_upload; same result.
Here you can see the 40 KB file:
http://pasteall.org/459911
Snippet (in case if the link is down):
<ul>
<li>0</li>
<li>1</li>
<li>2</li>
<li>3</li>
<li>4</li>
<li>5</li>
<li>6</li>
<li>7</li>
<li>8</li>
<li>9</li>
//Cut because stackoverflow does not allow me to post such big texts
...
<li>3183</li>
<li>3184</li>
<li>3185</li>
<li>3186</li>
<li
You can see that it stops downloading the rest; the question is: why?
UPDATE:
I changed it to this:
// If file exists, force download
$fileName = basename($downloadFilePath);
$this->response->setHeader('Content-Type', "application/force-download", TRUE);
$this->response->setHeader('Content-Disposition', 'attachment; filename=' . $fileName, TRUE);
$this->response->setHeader('Content-Length', $groesse, TRUE);
ob_start();
ob_flush();
flush();
$content = file_get_contents($downloadFilePath);
$this->response->setContent($content);
$this->response->sendHeaders();
return true; //i can also delete this line, since it is never reached.
Now the file is downloaded completely, but it is wrapped inside the HTML from the template; it gets rendered inside the Fluid variable mainContent.
Like this:
...
<!--TYPO3SEARCH_begin-->
<div class="clearfix col-sm-{f:if(condition:'{data.backend_layout} == 4',then:'12',else:'9')} col-md-{f:if(condition:'{data.backend_layout} == 4',then:'9',else:'6')} col-lg-{f:if(condition:'{data.backend_layout} == 4',then:'10',else:'8')} mainContent">
<f:format.raw>{mainContent}</f:format.raw>
</div>
<!--TYPO3SEARCH_end-->
...
It gets weirder and weirder...
I finally solved the problem. I just had to execute exit or die after sending the headers:
#readfile($downloadFilePath);
$this->response->sendHeaders();
exit;
NOTE: If you exit your code with exit or die, then a TYPO3 session value set with e.g. $GLOBALS['TSFE']->fe_user->setKey("ses", "token", DownloadUtility::getToken(32)); won't work anymore when not logged in to the back end! Use $GLOBALS['TSFE']->fe_user->setAndSaveSessionData("token", DownloadUtility::getToken(32)); in that case if no login should be required.
Now it works even if not logged in to the front end.
But I still don't know why the download worked without being cut off while logged in to the back end, even though the exit statement was missing. That's extremely weird, and we have no explanation.
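For what it's worth, the underlying pattern here is the classic plain-PHP forced download, where the script must stop right after streaming the file so that nothing else (in this case the rendered page template) gets appended to the output. A generic sketch, not TYPO3-specific; the path is hypothetical:
<?php
$downloadFilePath = '/path/to/some/file.pdf'; // hypothetical path
$fileName = basename($downloadFilePath);

// Send the download headers...
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $fileName . '"');
header('Content-Length: ' . filesize($downloadFilePath));

// ...stream the file...
readfile($downloadFilePath);

// ...and stop, so the framework does not keep rendering and wrap the file in the page template.
exit;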

How can I change chunk_size for Resumable Upload?

I need to upload some big files (about 1 GB) to Google Drive.
I am using google-api-client (Ruby) version 0.5.0:
media = Google::APIClient::UploadIO.new(file_name, mimeType, original_name)
result = client.execute!(
  :api_method => client.service.files.insert,
  :body_object => file,
  :media => media,
  :parameters => {
    'uploadType' => 'resumable',
    'alt' => 'json'
  })
I expected that the client would split the big file into parts and upload these parts to Drive.
But I can see in the logs that the client is sending only ONE BIG chunk to Drive.
Here is small log example:
Content-Length: "132447559"
Content-Range: "bytes 0-132447558/132447559"
How can I upload big files in chunks with google-api-client?
The intended usage is to try to upload the file in a single chunk; overall, it's more efficient/faster that way. But there are cases where chunking is preferable, so if you need to chunk the upload for whatever reason, just set the chunk_size property:
media = Google::APIClient::UploadIO.new(file_name, mimeType, original_name)
media.chunk_size = 1000000 # 1mb chunks
result = client.execute!(....)
I'm using API version 0.7.1, even though I know we're supposed to be using version 0.9 now, because the older version matches the Ruby examples in Google's documentation.
I had to resort to uploading in chunks because I was getting errors from the httpclient library complaining that the file size was too large to convert to an integer!
Unfortunately, using #stevebazyl's suggestion did not work for me, as it only uploads the first chunk and then throws a TransmissionError. This seems to be in the google-api-ruby-client code, specifically in the Google::APIClient class's execute! method: it doesn't handle an HTTP status of 308, which is what a resumable upload returns when it needs the next chunk. I changed the code to this:
when 200...300, 308
result
(See api_client.rb)
And I used the #send_all method of the ResumableUpload class, just like the sample code in the docs, and it worked for me. So in addition to #stevebazyl's code, I have:
media = Google::APIClient::UploadIO.new(opts[:file], 'video/*')
media.chunk_size = 499200000

videos_insert_response = client.execute!(
  :api_method => youtube.videos.insert,
  :body_object => body,
  :media => media,
  :parameters => {
    :uploadType => 'resumable',
    :part => body.keys.join(',')
  }
)
videos_insert_response.resumable_upload.send_all(client)

Zend Framework - session id regenerated, can't stay logged in [duplicate]

This question already has an answer here: Duplicate DB sessions created upon Zend_Auth login (1 answer). Closed 2 years ago.
I'm trying to store sessions in a database using Zend Sessions; however, for some reason my sessions die out. I'm not sure whether there's some code being executed that does this or whether it's something else.
I've noticed that the session ID seems to be regenerated a brief time after logging in.
This is even despite having added the following line in my htaccess file:
php_value session.auto_start 0
The end result is that I'm logged out within a minute of logging in.
Here's my code in my bootstrap file:
$config = array(
    'name'           => 'session',
    'primary'        => 'id',
    'modifiedColumn' => 'modified',
    'dataColumn'     => 'data',
    'lifetimeColumn' => 'lifetime'
);

$saveHandler = new Zend_Session_SaveHandler_DbTable($config);

Zend_Session::rememberMe($seconds = (60 * 60 * 24 * 30));
$saveHandler->setLifetime($seconds)->setOverrideLifetime(true);

Zend_Session::setSaveHandler($saveHandler);

// start your session!
Zend_Session::start();
I'm not using any other session-related functions, except perhaps Zend_Auth when logging in.
In fact, rememberMe calls the regenerateId function of the Session class; the end result is that I'm constantly logged out every few minutes now.
I think you might be having this problem because you're calling rememberMe BEFORE starting the session.
You have to start the session first; otherwise rememberMe won't do anything, since it needs a session to set the rememberMe time on.
rememberMe calls the regenerateId function, and the regeneration of the ID is what really needs the session to exist.
Place the rememberMe call after the session start and see how that works for you.
If that isn't it, then I don't know what it could be, since my code looks similar to yours.
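A minimal sketch of the ordering this answer suggests, reusing the same $config array and lifetime from the question; the only change is that rememberMe comes after Zend_Session::start():
$saveHandler = new Zend_Session_SaveHandler_DbTable($config);
$saveHandler->setLifetime($seconds = 60 * 60 * 24 * 30)->setOverrideLifetime(true);

Zend_Session::setSaveHandler($saveHandler);

// Start the session first...
Zend_Session::start();

// ...then extend the cookie lifetime for "remember me".
Zend_Session::rememberMe($seconds);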
Have you tried something like this?
protected function _initSession() {
    $config = array(
        'name'           => 'session',
        'primary'        => 'id',
        'modifiedColumn' => 'modified',
        'dataColumn'     => 'data',
        'lifetimeColumn' => 'lifetime',
        'lifetime'       => 60*60*24*30,
    );
    Zend_Session::setSaveHandler(new F_Session_SaveHandler_DbTable($config));
}
This way the lifetime isn't set after initialising the database sessions but is included directly in the initialisation options. It works for me; I see no reason why this should fail in your case :).
I think you need to look into the following values after your bootstrap code:
session.gc_maxlifetime
session.cookie_lifetime
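If it helps, a quick way to inspect (or override) those two values from PHP, purely as a sketch:
// Inspect the current session lifetimes.
var_dump(ini_get('session.gc_maxlifetime'), ini_get('session.cookie_lifetime'));

// Or override them before the session is started (values in seconds; cookie_lifetime 0 means "until the browser closes").
ini_set('session.gc_maxlifetime', (string) (60 * 60 * 24 * 30));
ini_set('session.cookie_lifetime', (string) (60 * 60 * 24 * 30));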
If you configure session resources in a *.ini config file, check the resources.session.cookie_domain parameter.
I spent 3 hours before I remembered about it.

Multipart File Upload in Ruby

I simply want to upload an image to a server with POST. As simple as this task sounds, there seems to be no simple solution in Ruby.
In my application I am using WWW::Mechanize for most things, so I wanted to use it for this too, and had source like this:
f = File.new(filename, File::RDWR)
reply = agent.post(
  'http://rest-test.heroku.com',
  {
    :pict      => f,
    :function  => 'picture2',
    :username  => @username,
    :password  => @password,
    :pict_to   => 0,
    :pict_type => 0
  }
)
f.close
This results in a completely garbled file on the server that looks scrambled all over (screenshot: http://imagehub.org/f/1tk8/garbage.png).
My next step was to downgrade WWW::Mechanize to version 0.8.5. This worked until I tried to run it, which failed with an error like "Module not found in hpricot_scan.so". Using the Dependency Walker tool I found that hpricot_scan.so needed msvcrt-ruby18.dll, yet after I put that .dll into my Ruby/bin folder it gave me an empty error box, from where I couldn't debug much further. So the problem here is that Mechanize 0.8.5 depends on Hpricot instead of Nokogiri (which works flawlessly).
The next idea was to use a different gem, so I tried Net::HTTP. After a short bit of research I found that there is no native support for multipart forms in Net::HTTP; instead you have to build a class that does the encoding etc. for you. The most helpful one I could find was the Multipart class by Stanislav Vitvitskiy. It looked good so far, but it does not do what I need, because I don't want to post only files, I also want to post normal data, and that is not possible with his class.
My last attempt was to use RestClient. This looked promising, as there are examples of how to upload files, yet I can't get it to post the form as multipart.
f = File.new(filename, File::RDWR)
reply = RestClient.post(
  'http://rest-test.heroku.com',
  :pict      => f,
  :function  => 'picture2',
  :username  => @username,
  :password  => @password,
  :pict_to   => 0,
  :pict_type => 0
)
f.close
I am using http://rest-test.heroku.com, which sends the request back for debugging so I can see whether it was sent correctly, and I always get this back:
POST http://rest-test.heroku.com/ with a 101 byte payload,
content type application/x-www-form-urlencoded
{
"pict" => "#<File:0x30d30c4>",
"username" => "s1kx",
"pict_to" => "0",
"function" => "picture2",
"pict_type" => "0",
"password" => "password"
}
This clearly shows that it does not use multipart/form-data as the content type but the standard application/x-www-form-urlencoded, although it definitely sees that pict is a file.
How can I upload a file in Ruby as a multipart form without implementing the whole encoding and data alignment myself?
Long problem, short answer: I was missing the binary mode for reading the image under Windows.
f = File.new(filename, File::RDWR)
had to be
f = File.new(filename, "rb")
Another method is to use Bash and curl. I used this method when I wanted to test multiple file uploads.
bash_command = 'curl -v -F "file=@texas.png,texas_reversed.png" ' \
               'http://localhost:9292/fog_upload/upload'
command_result = `#{bash_command}` # the backticks are important
puts command_result
