How to download a spatie/browsershot generated file into the user's Downloads folder? - laravel-5

In a Laravel 5.8 app I use https://github.com/spatie/browsershot,
and if I save the file as
$save_to_file = 'file.pdf';
Browsershot::html(htmlspecialchars_decode($pdf_content))
->showBackground()
->save($save_to_file);
the file is generated and saved in the /public dir of my app on my local machine.
If I try to set the path to the 'Downloads' directory of my Kubuntu 18 machine, as in
$save_to_file= '/home/currentuser/Downloads/file.pdf';
Browsershot::html(htmlspecialchars_decode($pdf_content))
->showBackground()
->save($save_to_file);
I get this error:
Symfony \ Component \ Process \ Exception \ ProcessFailedException
The command "PATH=$PATH:/usr/local/bin NODE_PATH=`npm root -g` node '/mnt/_work_sdb8/wwwroot/lar/votes/vendor/spatie/browsershot/src/../bin/browser.js' '{"url":"file:\/\/\/tmp\/0906513001561868598\/index.html","action":"pdf","options":{"path":"\/home\/serge\/Downloads\/file.pdf","args":[],"viewport":{"width":800,"height":600},"displayHeaderFooter":false,"printBackground":true}}'" failed. Exit Code: 1(General error) Working directory: /mnt/_work_sdb8/wwwroot/lar/votes/public Output: ================ Error Output: ==============
1) Is there a way to download the generated file into 'Downloads' (OS-independently)?
2) I think I could use a PHP function to move the file, but again, how do I determine the 'Downloads' directory OS-independently?
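In general, server-side code cannot write into a visitor's Downloads directory; the browser decides where downloaded files land. A minimal sketch of the usual approach (my own illustration, not from the thread; the method name downloadPdf is hypothetical and $pdf_content is the variable from the question): save the PDF to a server-side path first, then return it as a download response so the browser stores it in the user's own Downloads folder on any OS.
use Spatie\Browsershot\Browsershot;

public function downloadPdf()
{
    // Generate the PDF into a location the web server can write to.
    $save_to_file = storage_path('app/file.pdf');

    // $pdf_content as in the question.
    Browsershot::html(htmlspecialchars_decode($pdf_content))
        ->showBackground()
        ->save($save_to_file);

    // Send the file as an attachment; the browser saves it to the user's
    // Downloads folder. deleteFileAfterSend() removes the temp copy.
    return response()->download($save_to_file, 'file.pdf')->deleteFileAfterSend(true);
}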

I received the exact same error and had no idea why. I followed all the steps mentioned in the docs and tried various resources, but still could not figure it out.
Finally, this worked:
$save_to_file= '/var/www/laravel/storage/app/file.pdf';
Browsershot::url('https://www.google.com')
->noSandbox()->format('a4')->save($save_to_file);
The ->noSandbox() call was the key. (Browsershot drives headless Chrome through Puppeteer, and Chrome typically refuses to launch as the root user unless its sandbox is disabled, which is a common cause of this kind of generic exit-code-1 failure.) Let me know if this works for you.
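If you are rendering HTML rather than a URL, the same call should slot straight into the snippet from the question (this just combines the two code samples in this thread):
Browsershot::html(htmlspecialchars_decode($pdf_content))
    ->noSandbox()
    ->showBackground()
    ->save($save_to_file);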

Related

Imagick extension installation issue xampp

I am trying to install the Imagick extension on Windows 10 with PHP version 8.0.3, but I am getting the error below:
PHP Warning: PHP Startup: Unable to load dynamic library
'php_imagick.dll' (tried: D:\xampp\php\ext\php_imagick.dll (The
specified module could not be found),
D:\xampp\php\ext\php_php_imagick.dll.dll (The specified module could
not be found)) in Unknown on line 0
Windows : 10 X64
PHP version : 8.0.3
Steps to reproduce:
I added the imagick.dll file in the xampp\php\ext directory
Added the CORE_RL_*.dll and IM_MOD_RL_*.dll files to the xampp\php folder
Added extension=php_imagick.dll to the xampp\php\php.ini file
Restarted XAMPP
I am getting the error below on the web page.
In the PHP error log, this error is logged:
PHP Warning: PHP Startup: Unable to load dynamic library 'php_imagick.dll'
First, download this: https://windows.php.net/downloads/pecl/releases/imagick/3.7.0/php_imagick-3.7.0-8.0-ts-vs16-x64.zip
Extract the php_imagick.dll file from php_imagick-….zip and save it to the ext directory of your PHP installation.
Extract all the other DLL files from php_imagick-….zip and save them to the PHP root directory (where you have php.exe).
Add this line to your php.ini file:
extension=php_imagick.dll
Restart the Apache/NGINX Windows service (if applicable)
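To verify the extension actually loaded, a quick check from the command line helps (generic PHP CLI commands, not part of the original answer; assumes php.exe is on your PATH):
php --ri imagick
php -m | findstr imagick
If the first command prints the imagick configuration section instead of a startup warning, the DLL and its ImageMagick dependencies were found.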

Xdebug Failed loading C:\php\ext\php_xdebug.dll

Issue:
I am getting this error in my Apache log on startup:
Failed loading C:\php\ext\php_xdebug-2.9.2-7.4-vc15-x86_64.dll
Xdebug Wizard:
I used the xdebug wizard, which resulted in these instructions:
Download php_xdebug-2.9.2-7.4-vc15-x86_64.dll
Move the downloaded file to C:\php\ext
Edit C:\php\php.ini (or C:\WINDOWS\php.ini) and add the line
zend_extension = C:\php\ext\php_xdebug-2.9.2-7.4-vc15-x86_64.dll
Restart the webserver
Things I have tried:
Using these variations in php.ini:
zend_extension="C:\php\ext\php_xdebug-2.9.2-7.4-vc15-x86_64.dll"
zend_extension=php_xdebug-2.9.2-7.4-vc15-x86_64.dll
I ensured I was editing the correct php.ini file.
I checked the permissions on the DLL.
I am using:
Apache/2.4.41 (Win64) VC 15
php-7.4.3-Win32-vc15-x64
php_xdebug-2.9.2-7.4-vc15-x86_64.dll
You're using a release candidate of PHP 7.4.1 (php-7.4.1RC1-Win32-vc15-x64) - you might want to use the latest PHP 7.4.3.
You sometimes get a better error message if you try using PHP's command line.
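For example (generic diagnostics rather than anything from the original answer; run from the PHP directory):
C:\php> php -v
C:\php> php --ini
Startup problems with zend_extension are printed straight to the console by php -v, and php --ini shows which php.ini file was actually loaded.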

laravel-dropbox-driver throws exception when using laravel-backup package with cron

I'm using the spatie/laravel-backup package to back up my Laravel project. Everything works just fine locally. I uploaded my project to a host and set a cron job to back up my data every midnight, but it's not working. I found this exception in the log files:
Starting backup...
In Client.php line 51:
Argument 1 passed to Spatie\Dropbox\Client::__construct() must be of the type string, null given, called in /var/www/html/vendor/benjamincrozat/laravel-dropbox-driver/src/ServiceProvider.php on line 17
But when I run php artisan backup:run (or any other command from this package) on the host's command line, it works fine, so it only happens when running via cron.
Note that I'm using Dropbox to store the backup files.
Why is this happening?
Thanks
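One plausible explanation (my assumption; the thread quotes no accepted answer): cron jobs run with a minimal environment and usually from a different working directory, so the Dropbox token read from the environment or from a stale config cache ends up null. A sketch of a crontab entry that changes into the project directory before running artisan; the project path is hypothetical:
0 0 * * * cd /var/www/html && php artisan backup:run >> /dev/null 2>&1
Clearing or rebuilding the configuration cache on the host (php artisan config:clear, or config:cache once the token is available) is also worth trying.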

Yocto build broken when setting a remote rpm repository with https

I have generated a Yocto image to be used on all my target devices. When that image is running on the target devices, it must be possible to update it from a remote rpm repository over HTTPS.
To try doing that, I have added a dnf bbappend to my custom layer:
$ cat recipes-devtools/dnf/dnf_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += " \
file://yocto-adv-rpm.repo \
"
do_install_append () {
install -d ${D}/etc/yum.repos.d
install -m 0600 ${WORKDIR}/yocto-adv-rpm.repo ${D}/etc/yum.repos.d/yocto-adv-rpm.repo
}
FILES_${PN} += "/etc/yum.repos.d"
This is the content of repository configuration file included by dnf bbappend recipe:
$ cat recipes-devtools/dnf/files/yocto-adv-rpm.repo
[yocto-adv-rpm]
name=Rocko Yocto Repo
baseurl=https://storage.googleapis.com/my_repo/
gpgkey=https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
enabled=1
gpgcheck=1
This repository configuration breaks the image build. When I try to build the myimage recipe, I always get this error:
ERROR: myimage-1.0-r0 do_rootfs: [log_check] myimage: found 1 error message in the logfile:
[log_check] Failed to synchronize cache for repo 'yocto-adv-rpm', disabling.
ERROR: myimage-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/yocto/yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/log.do_rootfs.731
ERROR: Task (/home/yocto/yocto/sources/meta-mylayer/recipes-images/myimage.bb:do_rootfs) failed with exit code '1'
However, when I replace "https" with "http" in the "baseurl" variable:
baseurl=http://storage.googleapis.com/my_repo/
then the myimage recipe builds fine.
The host machine can download files from the https repository using wget:
$ wget https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
The previous command works fine, so the problem is not related to the host machine; I think it must be something related to Google's certificates and Yocto's native tooling.
I found some relevant information inside this file:
yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/dnf.librepo.log
The relevant part:
15:56:41 lr_download: Downloading started
15:56:41 check_transfer_statuses: Transfer finished: repodata/repomd.xml (Effective url: https://storage.googleapis.com/my_repo/repodata/repomd.xml)
15:56:41 check_finished_transfer_status: Fatal error - Curl code (77): Problem with the SSL CA cert (path? access rights?) for https://storage.googleapis.com/my_repo/repodata/repomd.xml [error setting certificate verify locations:
CAfile: /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/ca-certificates.crt
CApath: none]
15:56:41 lr_yum_download_repomd: repomd.xml download was unsuccessful
Can any of you provide useful advice on how to fix this?
Thank you in advance for your time! :-)
I finally fixed my issue by completely removing my dnf bbappend recipe from my custom layer and adding this variable to my distro.conf file:
PACKAGE_FEED_URIS = "https://storage.googleapis.com/my_repo/"
After that, at the end of the build process the image contains a valid /etc/yum.d/oe-remote-repo file and all the necessary stuff to manage it. There is no need to copy "ca-certificates.crt" manually at all.
Also, it's important to execute this command after the image build finishes:
$ bitbake package-index
This command generates a "repodata" directory within the package feed, which the target device needs once it uses the repo to update packages with the dnf client.
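On the target device, the feed can then be exercised with ordinary dnf commands (generic dnf usage, assumed rather than quoted from the thread):
# refresh metadata from the https feed, then upgrade packages
dnf makecache
dnf upgrade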
I found a temporary hack to fix my issue:
$ cp /etc/ssl/certs/ca-certificates.crt /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/
After that, I was finally able to build the image using the "https" repo.
Now I am in the process of fixing this issue in the right way. I'll come back with the final solution.

Hadoop and Hive Homes in CDH4

I'm trying to configure RHive in a CDH4 environment.
When loading the 'RHive' package in R, the error below is returned.
I'm guessing that's due to wrong home directories.
If so, what would be the correct ones?
Or, if that's not the reason, what is wrong here?
Any help would be much appreciated.
Thanks.
> Sys.setenv(HIVE_HOME="/etc/hive")
> Sys.setenv(HADOOP_HOME="/etc/hadoop")
> library(RHive)
Loading required package: rJava
Loading required package: Rserve
This is RHive 0.0-7. For overview type '?RHive'.
HIVE_HOME=/etc/hive
[1] "there is no slaves file of HADOOP. so you should pass hosts argument when you call rhive.connect()."
Error : .onLoad failed in loadNamespace() for 'RHive', details:
call: .jnew("org/apache/hadoop/conf/Configuration")
error: java.lang.ClassNotFoundException
In addition: Warning message:
In file(file, "rt") :
cannot open file '/etc/hadoop/conf/slaves': No such file or directory
Error: package/namespace load failed for 'RHive'
I had the same problem but solved it. The downside is that I have to keep track of a bunch of symlinks.
After struggling to install RHive_0.0-7.tar.gz on CDH 4.7.x and getting:
Warning in file(file, "rt") :
cannot open file '/etc/hadoop/conf/slaves': No such file or directory
[1] "there is no slaves file of HADOOP. so you should pass hosts argument when you call rhive.connect()."
In /etc/hadoop/conf I added the following symlink:
ln -s /opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/etc/hadoop/conf.empty/slaves slaves
(why Cloudera CDH 4.7 installs in /opt without creating the proper symlinks from /usr/lib is puzzling)
I also defined the following in /usr/lib64/R/etc/Renviron:
## set hive paths
HIVE_HOME='/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hive'
HADOOP_HOME='/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop'
LD_LIBRARY_PATH='/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop'
At a shell prompt I ran R CMD INSTALL RHive_0.0-7.tar.gz
Installation Happiness!!
++++++
Inside R-Studio (server)
>
> library(RHive)
Loading required package: rJava
Loading required package: Rserve
This is RHive 0.0-7. For overview type ‘?RHive’.
HIVE_HOME=/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hive
call rhive.init() because HIVE_HOME is set.
rhive.init()
>
+++++++
You should set the HADOOP_CONF_DIR separately.
Try export HADOOP_CONF_DIR=/etc/hadoop/conf/conf.pseudo
The conf.pseudo has the slaves file.
Though I'd be curious to see if you can make RHive work with CDH4.
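Putting the pieces of these answers together, a minimal R session sketch (paths copied from the answers above; adjust the parcel version to your installation):
# Point R at the CDH parcel homes and a conf dir that contains 'slaves'
Sys.setenv(HIVE_HOME="/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hive")
Sys.setenv(HADOOP_HOME="/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop")
Sys.setenv(HADOOP_CONF_DIR="/etc/hadoop/conf/conf.pseudo")
library(RHive)
rhive.init()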
