"cacher layer" for google-generated images - image

This question is prompted by a comment on an earlier question of mine. I've never heard of a cacher layer.
The suggestion was to cache google-generated images in this cacher-layer thingie. Can someone give me a pointer to the details of such a layer? "Details" = where does it live? How do I access it? And more.
Thanks so much!

I will explain what I meant.
First of all, I needed this system because the Google Chart API has a daily request cap, so I needed something to work around it.
The engine was pretty simple.
Consider the vanilla solution: in your HTML, your img's src points directly to Google:
<img src="//google.chart.api?params123">
With a cacher you will not point directly to Google but to your cacher engine:
<img src="//yourwebsite/googleImageCacher.php?id=123">
Now your googleImageCacher.php is dead simple:
It checks whether the requested image is already in the cache (which could be a file or whatever); if it's not present, it requests the image from Google, saves it, and echoes it.
Something like: (pseudocode)
$imageAssociation = array(
    '123'    => '//google.chart.api?params123',
    'image2' => '//google.chart.api?otherparam'
);
$id        = $_GET['id'];
$cacheFile = 'imageCacheDir/' . basename($id); // basename() guards against path traversal
if (file_exists($cacheFile)) {
    echo file_get_contents($cacheFile);
} else {
    // Request the image from Google, save it in imageCacheDir, then print it.
    // Prefix a scheme: file_get_contents() can't fetch protocol-relative URLs.
    $image = file_get_contents('https:' . $imageAssociation[$id]);
    file_put_contents($cacheFile, $image);
    echo $image;
}
Of course you can implement an expiration time in your googleImageCacher.php.
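A minimal expiration check might look like this (a sketch; the one-day TTL is an arbitrary choice):
$cacheFile = 'imageCacheDir/' . basename($_GET['id']);
$ttl = 86400; // one day, in seconds
if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) > $ttl) {
    unlink($cacheFile); // stale entry: the next request will re-fetch it from Google
}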

Related

TYPO3 Extbase - Download file is cut off to 40 KB if user is not logged in to the front end

I created a duplicate of a download extension from my colleague; it is basically an extension which just provides files to download in the back end.
Problem:
If I try to download a file while I am logged in to the back end, it works perfectly fine;
however, if I open a private browser window where I am not logged in to the back end, the file is always cut off and only the first 40 KB are downloaded, even though the file is normally 10 MB. Why is the file cut off?
I can download small files (< 40 KB) perfectly, without them getting cut off.
NOTE:
Before I edited the extension, the download worked perfectly, even when not logged in to the back end! And the download was triggered the same way.
Currently I am comparing the code, but the logic looks OK, since I did not change much (added a new model, renamed the extension, and some other stuff).
Does someone have a clue what can lead to this problem?
This is the relevant part in my download controller, where I first get the public URL of the file by passing the fid of the file, and then trigger the download by sending headers.
...
if ($this->request->hasArgument('fid')) {
    $this->fid = $this->request->getArgument('fid');
}
if ($this->request->hasArgument('cid')) {
    $this->cid = $this->request->getArgument('cid');
}
$fileobj = $this->fileRepository->findByUid($this->fid);
if ($fileobj->getFile() !== null) {
    $downloadFilePath = $fileobj->getFile()->getOriginalResource()->getPublicUrl();
    if (file_exists($downloadFilePath)) {
        $fileCounter = (int)$fileobj->getCounter();
        $fileobj->setCounter(++$fileCounter);
        $oldChecksum = $fileobj->getChecksume();
        $groesse = filesize($downloadFilePath);
        if (isset($oldChecksum)) {
            $checksum = sha1_file($downloadFilePath);
            $fileobj->setChecksume($checksum);
        }
        // update fileobj
        $this->fileRepository->update($fileobj);
        // Unset fileobj before persisting, otherwise there will also be changes
        $this->persistenceManager->persistAll();
        // If file exists, force download
        $fileName = basename($downloadFilePath);
        $this->response->setHeader('Content-Type', "application/force-download", TRUE);
        $this->response->setHeader('Content-Disposition', 'attachment; filename=' . $fileName, TRUE);
        $this->response->setHeader('Content-Length', $groesse, TRUE);
        #readfile($downloadFilePath);
        $this->response->sendHeaders();
        return true; // I could also delete this line, since it is never reached.
    } else {
        // Send emails to everyone who is entered in the address list in the extension configuration.
        $this->sendEmails('missing_file', $fileobj);
        $this->redirect(
            'list',
            'Category',
            NULL,
            array(
                'missing' => array(
                    'fileId' => $this->fid,
                    'category' => $this->cid
                )
            )
        );
    }
}
The 40 KB file does not contain anything that shouldn't be there; it is just cut off. I tested it by writing a lot of numbers to a file line by line and downloading it. Result: only a couple thousand numbers are in the file instead of all of them.
I tried it both with files stored on an FTP server and with files stored in user_upload; same result.
Here you can see the 40 KB file:
http://pasteall.org/459911
Snippet (in case the link is down):
<ul>
<li>0</li>
<li>1</li>
<li>2</li>
<li>3</li>
<li>4</li>
<li>5</li>
<li>6</li>
<li>7</li>
<li>8</li>
<li>9</li>
//Cut because stackoverflow does not allow me to post such big texts
...
<li>3183</li>
<li>3184</li>
<li>3185</li>
<li>3186</li>
<li
You can see that it stops downloading the rest, the question is: why?
UPDATE:
I changed it to this:
// If file exists, force download
$fileName = basename($downloadFilePath);
$this->response->setHeader('Content-Type', "application/force-download", TRUE);
$this->response->setHeader('Content-Disposition', 'attachment; filename=' . $fileName, TRUE);
$this->response->setHeader('Content-Length', $groesse, TRUE);
ob_start();
ob_flush();
flush();
$content = file_get_contents($downloadFilePath);
$this->response->setContent($content);
$this->response->sendHeaders();
return true; // I could also delete this line, since it is never reached.
Now the file is downloaded completely, but it is wrapped inside the HTML from the template; it gets rendered inside the Fluid variable mainContent.
Like this:
...
<!--TYPO3SEARCH_begin-->
<div class="clearfix col-sm-{f:if(condition:'{data.backend_layout} == 4',then:'12',else:'9')} col-md-{f:if(condition:'{data.backend_layout} == 4',then:'9',else:'6')} col-lg-{f:if(condition:'{data.backend_layout} == 4',then:'10',else:'8')} mainContent">
<f:format.raw>{mainContent}</f:format.raw>
</div>
<!--TYPO3SEARCH_end-->
...
It gets weirder and weirder...
I finally solved the problem. I just had to execute exit or die after sending the headers:
#readfile($downloadFilePath);
$this->response->sendHeaders();
exit;
NOTE: If you exit your code with exit or die, then a TYPO3 session value set with e.g. $GLOBALS['TSFE']->fe_user->setKey("ses", "token", DownloadUtility::getToken(32)); won't work anymore if the user is not logged in to the back end! Use $GLOBALS['TSFE']->fe_user->setAndSaveSessionData("token", DownloadUtility::getToken(32)); in that case, if no login should be required.
Now it works even when not logged in to the front end.
But I still don't know why the download worked without being cut off while logged in to the back end, even though the exit statement was missing. That's extremely weird, and we have no explanation.

Storage::get( ) using Amazon S3 returns false

Combining both Intervention Image and Amazon S3, I'd like to be able to pull a file from S3 and then use Image to do some cropping. This is what I have so far; why does Storage::get() return false?
$path = 'uploads/pics/123.jpeg';
$exists = Storage::disk('s3')->exists($path); // returns true
$image = Storage::disk('s3')->get($path); // returns false
From the S3 side of things, the bucket permissions are set to 'Everyone', and Storage::getVisibility() returns public... I'm not sure why I can't load the image as if it were a local image.
After digging deeper into the code I found this message:
"Error executing "GetObject" on "file"; AWS HTTP error: file_exists(): open_basedir restriction in effect. File(/etc/pki/tls/certs/ca-bundle.crt) is not within the allowed path(s): (paths)"
At first it seemed that my server didn't have this file, but it does! The file is just located in another folder:
/etc/ssl/certs/ca-certificates.crt
So, to solve the problem on Ubuntu, I had to create the folder /etc/pki/tls/certs and then symlink to the correct file:
cd /etc/pki/tls/certs;
sudo ln -s /etc/ssl/certs/ca-certificates.crt ca-bundle.crt;
Edit your php.ini and add /etc/pki/tls/certs/ca-bundle.crt to the open_basedir configuration.
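For example, the relevant php.ini line might then look like this (the other paths are illustrative, keep whatever your setup already allows):
open_basedir = "/var/www/html:/tmp:/etc/pki/tls/certs/ca-bundle.crt"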
Restart your PHP server!
For me this solved the problem; I hope it helps!
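Alternatively, if you construct the S3 client yourself with the AWS SDK for PHP v3, you can point it at the CA bundle the machine already has instead of creating symlinks. A sketch (the region value is a placeholder):
use Aws\S3\S3Client;

// Verify TLS against Ubuntu's own CA bundle, so the SDK
// never looks under /etc/pki/tls/certs at all.
$client = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1', // placeholder
    'http'    => [
        'verify' => '/etc/ssl/certs/ca-certificates.crt',
    ],
]);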
Since Dec 2020, Amazon S3 now provides strong read-after-write consistency in all regions, rendering this answer obsolete. For more details, refer to the Amazon S3 Strong Consistency page.
This shouldn't be an issue anymore.
The old answer below has been kept for reference purposes and for providing a reason for the bounty previously awarded.
From the Amazon S3 documentation:
Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all regions with one caveat. The caveat is that if you make a HEAD or GET request to the key name (to find if the object exists) before creating the object, Amazon S3 provides eventual consistency for read-after-write.
Given the example code where the path is static and the exists call is made prior to the get, I'm conjecturing that you're being hit with eventual consistency. Your get should eventually return. Try:
$backoff = 0;
while (false === ($image = Storage::disk('s3')->get($path))) {
    if (5 < $backoff) {
        throw new \RuntimeException;
    }
    sleep(pow(2, $backoff++));
}
If you are using Laravel 5, then try this method.
$photo = $attributes['banner_image'];
$s3 = AWS::createClient('s3'); // AWS facade from the aws-sdk-php-laravel package
try {
    $response = $s3->putObject([
        'Bucket' => 'gfpressreleasephotos',
        'Key'    => str_random(8) . '.' . str_replace(' ', '-', strtolower($photo->getClientOriginalName())),
        'Body'   => fopen($photo->getRealPath(), 'r'),
        'ACL'    => 'public-read',
    ]);
    if ($response->get('ObjectURL') != null) {
        $photourl = $response->get('ObjectURL');
    } else {
        $photourl = $response->get('Location');
    }
    $attributes['banner_image'] = $photourl;
} catch (S3Exception $e) { // use Aws\S3\Exception\S3Exception;
    return "There was an error uploading the file.\n";
}

Obtaining a Facebook auth token for a command-line (desktop) application

I am working for a charity which is promoting sign language, and they want to post a video to their FB page every day. There's a large (and growing) number of videos, so they want to schedule the uploads programmatically. I don't really mind what programming language I end up doing this in, but I've tried the following and not got very far:
Perl using WWW::Facebook::API (old REST API)
my $res = $client->video->upload(
    title       => $name,
    description => $description,
    data        => scalar(read_file("videos/split/$name.mp4"))
);
Authentication is OK, and this correctly posts a facebook.video.upload method to https://api-video.facebook.com/restserver.php. Unfortunately, this returns "Method unknown". I presume this is to do with the REST API being deprecated.
Facebook::Graph in Perl or fb_graph gem in Ruby. (OAuth API)
I can't even authenticate. Both of these are geared towards the web rather than the desktop flavour of OAuth, but I think I ought to be able to do:
my $fb = Facebook::Graph->new(
    app_id   => "xxx",
    secret   => "yyy",
    postback => "https://www.facebook.com/connect/login_success.html"
);
print $fb->authorize->extend_permissions(qw(publish_stream read_stream))->uri_as_string;
Go to that URL in my browser, capture the code parameter returned, and then
my $r = $fb->request_access_token($code);
Unfortunately:
Could not fetch access token: Bad Request at /Library/Perl/5.16/Facebook/Graph/AccessToken/Response.pm line 26
Similarly in Ruby, using fb_graph,
fb_auth = FbGraph::Auth.new(APP_ID, APP_SECRET)
client = fb_auth.client
client.redirect_uri = "https://www.facebook.com/connect/login_success.html"
puts client.authorization_uri(
    :scope => [:publish_stream, :read_stream]
)
Gives me a URL which returns a code, but running
client.authorization_code = <code>
FbGraph.debug!
access_token = client.access_token!
returns
{
    "error": {
        "message": "Missing client_id parameter.",
        "type": "OAuthException",
        "code": 101
    }
}
Update: When I change the access_token! call to access_token!("foobar") to force Rack::OAuth2::Client to put the identifier and secret into the request body, I get the following error instead:
{
    "error": {
        "message": "The request is invalid because the app is configured as a desktop app",
        "type": "OAuthException",
        "code": 1
    }
}
How am I supposed to authenticate a desktop/command line app to Facebook using OAuth?
So, I finally got it working, without setting up a web server and doing a callback. The trick, counter-intuitively, was to turn off the "Desktop application" setting and not to request offline_access.
Facebook::Graph's support for posting videos doesn't seem to work at the moment, so I ended up doing it in Ruby.
fb_auth = FbGraph::Auth.new(APP_ID, APP_SECRET)
client = fb_auth.client
client.redirect_uri = "https://www.facebook.com/connect/login_success.html"

if ARGV.length == 0
  puts "Go to this URL"
  puts client.authorization_uri(:scope => [:publish_stream, :read_stream])
  puts "Then run me again with the code"
  exit
end

if ARGV.length == 1
  client.authorization_code = ARGV[0]
  access_token = client.access_token! :client_auth_body
  File.open("authtoken.txt", "w") { |io| io.write(access_token) }
  exit
end

file, title, description = ARGV
access_token = File.read("authtoken.txt")
fb_auth.exchange_token! access_token
File.open("authtoken.txt", "w") { |io| io.write(fb_auth.access_token) }

me = FbGraph::Page.new(PAGE_ID, :access_token => access_token)
me.video!(
  :source => File.new(file),
  :title => title,
  :description => description
)
The problem in your case is that for OAuth you need some endpoint URL which is publicly reachable over the Internet for Facebook's servers (which can be a no-go for normal client PCs), or a desktop application which is capable of WebViews (and I assume the command line isn't).
Facebook states at https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow#login that you can build a desktop client login flow, but only via so-called WebViews. Therefore, you'd need to call the OAuth endpoint like this:
https://www.facebook.com/dialog/oauth?client_id={YOUR_APP_ID}&redirect_uri=https://www.facebook.com/connect/login_success.html&response_type=token&scope={YOUR_PERMISSION_LIST}
You then have to inspect the resulting redirected WebView URL as quoted:
When using a desktop app and logging in, Facebook redirects people to
the redirect_uri mentioned above and places an access token along with
some other metadata (such as token expiry time) in the URI fragment:
https://www.facebook.com/connect/login_success.html#access_token=ACCESS_TOKEN...
Your app needs to detect this redirect and then read the access token out of the URI using the mechanisms provided by the OS and development framework you are using.
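For instance, a minimal sketch of reading the token back out in PHP ($redirectedUrl and the token value are placeholders):
// $redirectedUrl is whatever URL the embedded browser ended up on
$redirectedUrl = 'https://www.facebook.com/connect/login_success.html#access_token=ACCESS_TOKEN&expires_in=5183944';
$fragment = parse_url($redirectedUrl, PHP_URL_FRAGMENT); // everything after the '#'
parse_str($fragment, $params);                           // turn 'a=b&c=d' into an array
$accessToken = isset($params['access_token']) ? $params['access_token'] : null;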
If you want to do this in "hacking mode", I'd recommend the following:
As you want to post to a Page, get a Page Access Token and store it locally. This can be done by using the Graph Explorer at the
https://developers.facebook.com/tools/explorer?method=GET&path=me%2Faccounts
endpoint. Remember to give "manage_pages" and "publish_actions" permissions.
Use cURL (http://curl.haxx.se/docs/manpage.html) to POST the videos to the Graph API with the access token and the appropriate Page ID you acquired in step 1, like the following:
curl -v -0 --form title={YOUR_TITLE} \
    --form description={YOUR_DESCRIPTION} \
    --form source=@{YOUR_FULL_FILE_PATH} \
    "https://graph-video.facebook.com/{YOUR_PAGE_ID}/videos?access_token={YOUR_ACCESS_TOKEN}"
References:
https://developers.facebook.com/docs/graph-api/reference/page/videos/#publish
https://developers.facebook.com/docs/reference/api/video/
From the facebook video API reference:
An individual Video in the Graph API.
To read a Video, issue an HTTP GET request to /VIDEO_ID with the
user_videos permission. This will return videos that the user has
uploaded or has been tagged in.
Video POST requests should use graph-video.facebook.com.
So you should be posting to graph-video.facebook.com if you are to upload video.
You also need extended permissions from the user or profile you'll be uploading to; in this case you need video_upload. This permission is requested only once, when the currently logged-in user is asked for it by the app.
And your endpoint should be:
https://graph-video.facebook.com/me/videos
If you always want to post to a specific user or page, then change the /me part of the endpoint to that user ID or page ID.
Here's a sample (in PHP):
$app_id = "YOUR_APP_ID";
$app_secret = "YOUR_APP_SECRET";
$my_url = "YOUR_POST_LOGIN_URL";
$video_title = "YOUR_VIDEO_TITLE";
$video_desc = "YOUR_VIDEO_DESCRIPTION";
$code = $_REQUEST["code"];
if(empty($code)) {
$dialog_url = "http://www.facebook.com/dialog/oauth?client_id="
. $app_id . "&redirect_uri=" . urlencode($my_url)
. "&scope=publish_stream";
echo("<script>top.location.href='" . $dialog_url . "'</script>");
}
$token_url = "https://graph.facebook.com/oauth/access_token?client_id="
. $app_id . "&redirect_uri=" . urlencode($my_url)
. "&client_secret=" . $app_secret
. "&code=" . $code;
$access_token = file_get_contents($token_url);
$post_url = "https://graph-video.facebook.com/me/videos?"
. "title=" . $video_title. "&description=" . $video_desc
. "&". $access_token;
echo '<form enctype="multipart/form-data" action=" '.$post_url.' "
method="POST">';
echo 'Please choose a file:';
echo '<input name="file" type="file">';
echo '<input type="submit" value="Upload" />';
echo '</form>';
I'm concerned about the upload speed if the videos are too big, but I'm guessing your customer has already sorted that out (compressed/optimized/short videos, etc.).
I've made you a demo here. Go to my website (I own that domain) and try to upload a video. I tried with this one, which is a relatively small 4 MB file. Be sure that this script will only try to upload a video, nothing more (to the FB profile you are currently logged in to, that is). But if you are still concerned, copy my snippet, upload it to your own server (with PHP support, of course), create a test app whose site URL is that domain, and be sure to specify in the $my_url variable your endpoint, which is basically the full path to the script receiving responses from Facebook:
http://yourdomain.com/testfb.php
If you still want to do it in a desktop app, then go to developers.facebook.com, into your app settings:
Settings > Advanced
Look for the first option and enable that switch, so that Facebook allows you to POST from a desktop or native app instead of a web server.
Note: I'm not an expert on Ruby, but the above working PHP code should be pretty obvious and easy to port to it.
As far as I recall, what you want isn't really possible without some kind of endpoint that can receive a callback from Facebook.
If you can finagle an OAuth token, from say the Graph API Explorer, then it becomes pretty trivial to use a gem like koala to upload your video.
Here's the salient bit:
@graph = Koala::Facebook::API.new(access_token)
@graph.put_video(path_to_my_video)
I've made you a sample project here: fb-upload-example

Simple MediaWiki extension debugging

I am trying to write my very first MediaWiki extension and need some way to debug it. What is the simplest way to do it? Showing a message, logging to a file, etc. would be fine. I just want to slowly step through the code and see where it breaks and what the contents of a variable are.
I've tried (from http://www.mediawiki.org/wiki/Manual:How_to_debug#Useful_debugging_functions)
// ...somewhere in your code
if ( true ) {
    wfDebugLog( 'myext', 'Something is not right: ' . print_r( 'asdf', true ) );
}
in extensions/myext/myext.php and added to LocalSettings.php
require_once( 'extensions/myext/myext.php' );
# debugging on
$wgDebugLogGroups = array(
    'myext' => 'extensions/myext/myextension.log'
);
but then my wiki doesn't work at all (error 500). With the above code removed from myext.php, everything's fine (with $wgExtensionCredits in myext.php, I can see myext in Special:Version).
Is this the right thing to do (and if so, what is the mistake), or is there a better/simpler way to start with?
A 500 means you have a syntax error or wrong configuration somewhere. Have you followed the instructions at Manual:How to debug and turned on PHP logging, so you can at least see what is causing the error? Alternatively, take a look at your Apache server log.
Also, you'll want to turn on debugging before you load your own extension!
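For example, a reordered version of the question's LocalSettings.php (a sketch; these are the paths from the question, and an absolute log path writable by the web server is safest):
$wgDebugLogGroups = array(
    'myext' => __DIR__ . '/extensions/myext/myextension.log'
);
require_once( __DIR__ . '/extensions/myext/myext.php' );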
Add these to LocalSettings.php for debugging:
error_reporting( -1 );
ini_set( 'display_startup_errors', 1 );
ini_set( 'display_errors', 1 );
$wgShowExceptionDetails = true;
$wgDebugToolbar = true;
$wgShowDebug = true;
$wgDevelopmentWarnings = true;
$wgDebugDumpSql = true;
$wgDebugLogFile = '/tmp/debug.log';
$wgDebugComments = true;
$wgEnableParserCache = false;
$wgCachePages = false;
You can log debug messages with wfDebug();
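For example (a sketch; the message text and $foo are arbitrary):
wfDebug( 'myext: value of $foo is ' . print_r( $foo, true ) );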
Learn more at https://www.mediawiki.org/wiki/Manual:Structured_logging/en

Does anyone know what is the 32 character string before the product image filename in Magento?

I ask this question since I am trying to get the images I have just copied from Domain A to work in Domain B (which is using the same database).
http://DOMAIN_A/magento/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/b/0/b0041-1.jpg
I think knowing what the 32 character string is will help me find a good explanation of why the images are not being found in the front end or back end of Magento after the reinstall on DOMAIN B.
RE: Magento version 1.4.0.1
Here's the code that creates that filename path, found in Mage_Catalog_Model_Product_Image:
// build new filename (most important params)
$path = array(
    Mage::getSingleton('catalog/product_media_config')->getBaseMediaPath(),
    'cache',
    Mage::app()->getStore()->getId(),
    $path[] = $this->getDestinationSubdir()
);
if ((!empty($this->_width)) || (!empty($this->_height)))
    $path[] = "{$this->_width}x{$this->_height}";

// add misc params as a hash
$miscParams = array(
    ($this->_keepAspectRatio  ? '' : 'non') . 'proportional',
    ($this->_keepFrame        ? '' : 'no') . 'frame',
    ($this->_keepTransparency ? '' : 'no') . 'transparency',
    ($this->_constrainOnly    ? 'do' : 'not') . 'constrainonly',
    $this->_rgbToString($this->_backgroundColor),
    'angle' . $this->_angle,
    'quality' . $this->_quality
);

// if has watermark add watermark params to hash
if ($this->getWatermarkFile()) {
    $miscParams[] = $this->getWatermarkFile();
    $miscParams[] = $this->getWatermarkImageOpacity();
    $miscParams[] = $this->getWatermarkPosition();
    $miscParams[] = $this->getWatermarkWidth();
    $miscParams[] = $this->getWatermarkHeigth();
}
$path[] = md5(implode('_', $miscParams));

// append prepared filename
$this->_newFile = implode('/', $path) . $file; // the $file contains heading slash
So, the hash is generated from the configuration info (aspect ratio, etc), as well as the watermark info. This information will not usually change. However, I do see that the path is partially generated from the store_id of the current store, so your trouble may be there.
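To make this concrete: the 32-character string is just md5() over the underscore-joined parameter list. A quick sketch with hypothetical values (not taken from your store):
// Hypothetical misc params, for illustration only
$miscParams = array('proportional', 'frame', 'transparency', 'notconstrainonly', 'ffffff', 'angle90', 'quality90');
echo md5(implode('_', $miscParams)); // a 32-character hex string, used as the cache directory name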
Is there a reason you can't let Magento use its normal caching procedures for both stores? Since Magento checks the filesystem for the cached image, there shouldn't be a conflict.
Hope that helps!
Thanks,
Joe
Upon contemplation, are you just trying to get the catalog images to work in both domains? The non-cached versions of the catalog images are at %magento%/media/catalog/product. Copy the directories from that location and your catalog images should work.
Moving over the cached images isn't going to get you far, since they will be deleted the next time you flush the Magento cache. So, having moved the images that are in /media/catalog/product, flush the Magento image cache. Make sure that the file permissions are correct for reading. Then head into Mage_Catalog_Model_Product_Image and take a look at the following code (approx. line 270):
if ($file) {
    // add these for debugging
    Mage::log($baseDir . $file);
    Mage::log(file_exists($baseDir . $file));
    Mage::log($this->_checkMemory($baseDir . $file));
    if ((!file_exists($baseDir . $file)) || !$this->_checkMemory($baseDir . $file)) {
        $file = null;
    }
}
Add a var_dump or Mage::log statement in there (depending on whether you have logging enabled), and verify that the path to the images is correct, and that you have enough memory for the operation. This is the code that will choose the default image for you if no image path exists. If you still can't get it, post the output of those three logging statements and we'll keep trying. :)
