Newsletter tracking image in Outlook

I have a newsletter system that keeps track of the people who read it. This only works if the recipient gives permission to download the images, but that is not my problem at this time.
My problem is that when I open a newsletter in Outlook (2010) and give permission to download the images, my system doesn't register the view. When I open the same newsletter in Gmail, it works without any problem. Even when I use Outlook to save the e-mail to an HTML file and open that file, a new view is registered. The page that saves the view and renders a 1x1 image doesn't return any errors, and no errors can be found in the Apache logs.
The strange thing is that it still worked until a week ago, but even if I restore backups of the code, it still doesn't work.
The image URL is built with a base64-encoded string, for example:
http://domain.com/tracker/eyJtYWlsaW5nSWQiOiI4MjQiLCJjb250YWN0SWQiOjM3MzA5LCJjaHVuayI6ImIyYmNiNzhkNjAyMmVmNzQ0NmM4ZDA0YzU1ZGZhMTY0In0=/
This encoded string holds a JSON object containing the newsletter id, a contact id and an MD5 string which I use to validate the data.
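For illustration, a minimal sketch of how such a payload could be decoded and validated on the tracker endpoint; $encodedSegment, $secret and the exact fields hashed into the "chunk" value are assumptions, since the question doesn't show that part:
// Decode the base64 URL segment back into the JSON payload.
$payload = json_decode(base64_decode($encodedSegment), true);
if (is_array($payload)) {
    // Recompute the MD5 over the other fields plus a server-side secret
    // and compare it with the "chunk" value to detect tampering.
    $expected = md5($payload['mailingId'] . $payload['contactId'] . $secret);
    if (hash_equals($expected, $payload['chunk'])) {
        // Data is valid: register the view for this mailing/contact pair.
    }
}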
I have run out of ideas on how to fix or debug this issue. Does anyone have a tip or, even better, a solution? :) Is it possible that Microsoft updated Outlook to prevent it from downloading this kind of image?

Check whether you are sending the correct MIME type.
I suggest using an extension in the URL, for example .png or .jpg.
Try a different domain.
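One quick way to check the MIME type suggestion is to inspect the headers the tracker URL actually returns, for example with PHP's get_headers() (curl -I works just as well):
// Fetch only the response headers for the tracker URL and look at the
// Content-Type a mail client such as Outlook would receive.
$headers = get_headers('http://domain.com/tracker/eyJtYWlsaW5nSWQiOiI4MjQiLCJjb250YWN0SWQiOjM3MzA5LCJjaHVuayI6ImIyYmNiNzhkNjAyMmVmNzQ0NmM4ZDA0YzU1ZGZhMTY0In0=/', 1);
var_dump($headers['Content-Type']);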

This is the code for generating the image:
// Return a 200 response and explicitly mark the body as a PNG image.
header('HTTP/1.0 200 OK');
header('Content-Type: image/png');
// Generate a 1x1 white PNG on the fly and stream it to the client.
$trackerImage = imagecreate(1, 1);
$bgColor = imagecolorallocate($trackerImage, 255, 255, 255);
imagepng($trackerImage);
imagedestroy($trackerImage);
This always worked until the server had a hardware crash 2 weeks ago... The hosting company claims nothing has been changed in the server's configuration.
I already tried adding an extension to the image path, but that didn't make a difference for Outlook.

Try setting the HTML so the image is displayed as if it were a larger image. Or, even better, display a normal image alongside it.

I just happened to resolve this issue. The cause turned out to be fairly simple, but very difficult to detect.
When saving data about a user, I also store the user-agent. In the database I used a varchar(255) field for this information. However, the user-agent string that Outlook sends turned out to be longer than 255 characters. This caused a database error, so no image was generated.
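As a defensive fix along the same lines, the user-agent can be truncated to the column width before it is saved (or the column widened to TEXT); a minimal sketch:
// Cap the user-agent at 255 characters so an unusually long value,
// such as the one Outlook sends, can no longer break the INSERT.
$userAgent = isset($_SERVER['HTTP_USER_AGENT'])
    ? substr($_SERVER['HTTP_USER_AGENT'], 0, 255)
    : '';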

Related

MPDF in Laravel can't output (inline) pdf

I am using the code below in Laravel 5.5 with mpdf 8.0:
$mpdf = new \Mpdf\Mpdf();
$mpdf->WriteHTML('Hello World');
$mpdf->Output("test","I");
It outputs gibberish/garbage, seemingly showing the PDF file in raw form.
Some findings
If I use $mpdf->Output($reportPath, 'F'); (saving it to a file) and then open that file, it opens as expected.
If I place die(); after $mpdf->Output("test","I"); it shows the document.
My suspicion is that it has something to do with Content-Type: application/pdf not being set by default, but I have also tried calling header("Content-type:application/pdf"); before Output, to no avail. The response still shows Content-Type: text/html; charset=UTF-8 in the Network tab of Chrome (I also tried Firefox).
Some back-story
It used to work just fine on PHP 7.3, but I had to update to PHP 7.4 because of a library and a multiple-applications-on-a-single-server scenario.
I also started using a sub-domain for my application instead of placing the directories after the domain.
I'm looking for
A solution that doesn't require me to place die(); after the Output call.
Or some clue as to why this has started happening and/or why I need to place die(); after Output.
Any other solution.
The goal is to provide some reference for people encountering the same issue in the future, since I have spent hours on this and haven't found anything that specifically addresses it.
OK, so I found out that I can't just rely on $this->mpdf->Output('test.pdf', "I") to output my result to the browser (even though it previously worked with that same line).
For some reason the response started going out with Content-Type: text/html, so I had to set the header myself.
Solution
I did it as below:
return response($this->mpdf->Output('test.pdf',"I"),200)->header('Content-Type','application/pdf');
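Written out as a full controller method, roughly the same fix looks like the sketch below; it uses mpdf's string-return destination so Laravel's response object sets the header (the class and method names here are placeholders, not from the original code):
use Mpdf\Mpdf;
use Mpdf\Output\Destination;

class ReportController extends Controller
{
    public function show()
    {
        $mpdf = new Mpdf();
        $mpdf->WriteHTML('Hello World');

        // Ask mpdf for the PDF as a string ('S') and let Laravel send it
        // with an explicit Content-Type, instead of echoing it inline ('I').
        return response($mpdf->Output('test.pdf', Destination::STRING_RETURN), 200)
            ->header('Content-Type', 'application/pdf');
    }
}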

Google Drive API Console: Error saving Drive UI integration page

I have a webapp in production that interacts with Google Drive through Google Drive API.
I need to change some settings in Drive interaction but I can't save.
When I save the Drive UI integration page, I receive this error:
There's a problem at our end.
Please try again. If the problem persists, please let us know using
the "Send feedback" link below. Thanks!
(Looking at the Network console, there is an Internal Server Error on a POST call.)
I have been sending feedback for months: nobody answers and the bug is still there.
I also tried creating another project: I can save the first time, but then the bug returns.
What can I do? Does anyone have the same problem?
Is there a way to receive a reply from Google? Is there some workaround?
Thank you.
I think the problem must be the Client ID.
Before adding the Client ID, go to Credentials -> OAuth 2.0 Client IDs,
then select and edit your Client ID. After that, add your production site URL to Authorized JavaScript origins and Authorized redirect URIs.
Then enter your Client ID in the Drive UI integration page.
While trying to get the Drive UI configured myself, I noticed a couple of errors (that don't come with any specific error messages):
When adding an Open URL it has to be a valid domain; for instance I tried to test it with localhost, to no avail. Something like https://devbox.app.com worked, but something like https://localhost:8888 does not. Even though https://localhost is a valid JavaScript origin in the client_id configuration (at least for the app I am working on, not sure about other apps), localhost doesn't work as an Open URL.
When adding the mimeTypes, they need to be in the format */* and can include custom mimeTypes like application/custom+xml and application/custom-name+json. I'm not sure about other custom types that aren't in a particular format like XML or JSON, and I'm not sure about wildcards either.
When adding file extensions, do not include the '.', just the name of the file extension.
For the app icon, the upload only failed when the image wasn't the exact required dimensions; I ended up editing some icons in Photoshop to change the pixel dimensions as a quick workaround during development.
That worked for me to get it to save and I tested it with a file that had a custom mimeType (application/custom-name+xml specifically) and custom file extension!

$mailer->addAttachment() not sending (or timing out) large files

Currently I am able to send attachments using Joomla:
$mailer = JFactory::getMailer();
$config = JFactory::getConfig();
...
$mailer->addAttachment($filepath);
I am able to send attachments of very small sizes (I've tested 1-100 KB files of different types, including .pdf), but when I tried to attach a 1.7 MB PDF file, the page stops loading after a few seconds and then shows a white screen. I checked in developer mode and it says "Failed to load response data.".
Do you know how to go about this? Is this a max upload size issue, or did the request just time out, maybe because of a slow upload?
Thanks in advance.
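There is no accepted answer here, but one quick diagnostic (a sketch, not a confirmed cause) is to dump the PHP limits that commonly matter when the mailer base64-encodes a large attachment:
// Attachment encoding adds roughly 33% overhead, so memory, post size
// and execution time limits can all come into play for larger files.
foreach (array('upload_max_filesize', 'post_max_size', 'memory_limit', 'max_execution_time') as $key) {
    echo $key . ' = ' . ini_get($key) . PHP_EOL;
}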

Dreamweaver CS6 Spry validation

I have a text field and a Spry validation text field.
In the Spry properties I changed the max chars value to 50, checked required, and selected validation on blur.
When I test the page I get all the error messages at once ("value required", "exceeded max number of chars") and the text field is not coloured. It looks good in the preview in Dreamweaver, but when I run it in the browser it fails.
Any hints?
--FIXED--
I did not copy the Spry assets folder to the testing server; I only had it in my local site folder.
I have had a similar problem with the error messages; it is extremely frustrating. I eventually found a solution via http://cssmenumaker.com/dreamweaver-css-menu-extension - I would definitely recommend them.

How can I scrape an image that doesn't have an extension?

Sometimes I come across an image that I can't scrape and save. An example of this is:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487
When I hit the URL from Internet Explorer I see the image, but when I try to get it with the code below, GetResponse throws a "System.Net.WebException: The remote server returned an error: (403) Forbidden" error:
string url = "https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487";
WebRequest request = WebRequest.Create(url);
WebResponse response = request.GetResponse();
Any ideas on how to get this image?
Edit:
I am able to save images that do have extensions. For example, I can scrape the following image just fine:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12659/image/original.jpg?1326828951
Although HTTP was originally supposed to be stateless, there are a lot of implementations that rely on it not being stateless. I could configure my web server to only accept requests for "http://mydomain.com/sexy_avatar.jpg" if you provide a cookie proving you are logged in. If not, I send you a 303 redirect to "http://mydomain.com/avatar_for_public_use.jpg".
Amazon could be doing the same. Try to load the web page using Chrome, and look at the Network view in developer mode (CTRL+SHIFT+J) to see all headers supplied to the website. Maybe you even need to do a full navigation in the same session before you are allowed to see the image. This is certainly the case in many web applications I have developed :-)
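As a sketch of that approach, shown here in PHP with cURL (the same idea applies to the C# WebRequest in the question): replay the headers copied from the browser's Network view. The header values below are placeholders, not known to be the ones S3 actually checks.
// Request the image while sending the same kind of headers a real
// browser would; adjust the values to match what the Network view shows.
$ch = curl_init('https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487');
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_USERAGENT      => 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36',
    CURLOPT_REFERER        => 'https://www.plumdistrict.com/',
));
$image = curl_exec($ch);
curl_close($ch);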
Well, it looks like it's being generated from a script (possibly being retrieved from a database). The server should be sending a file/content type to go along with that... but it doesn't seem to be, which I believe is a violation of standards.
My Linux box knows full well that that's a JPEG image once it's on my hard drive, because it examines file headers rather than relying on extensions. Perhaps there is a tool to do the same in Windows?
Edit: Actually, on further contemplation, it seems odd that you'd get a 403 for that. Perhaps the server is actually blocking you from retrieving the file in that manner.
