I need to use the latest headless Chrome with Puppeteer on AWS Lambda, and for that I have referred to: https://github.com/adieuadieu/serverless-chrome/blob/master/docs/chrome.md.
I have already tried all of the following approaches to build headless Chrome using the given instructions, but none of them work; each fails with the same error shown below.
Locally (building headless Chromium locally with Docker)
On AWS EC2 (building Chromium using an EC2 Spot Instance (spot-block))
Amazon Linux on EC2 (building Chromium without Docker, using just the build.sh script)
ERROR:
ninja: Entering directory `out/Headless'
[1/26797] STAMP obj/build/win/default_exe_manifest.stamp
[2/26797] ACTION //build/util:webkit_version(//build/toolchain/linux:clang_x64)
[3/26797] STAMP obj/base/numerics/base_numerics.stamp
[4/26797] ACTION //base:build_date(//build/toolchain/linux:clang_x64)
[5/26797] STAMP obj/build/buildflag_header_h.stamp
[6/26797] STAMP obj/base/util/type_safety/type_safety.stamp
[7/26797] CXX obj/base/base_static/base_switches.o
FAILED: obj/base/base_static/base_switches.o
../../third_party/llvm-build/Release+Asserts/bin/clang++ -MMD -MF obj/base/base_static/base_switches.o.d -DUSE_AURA=1 -DUSE_NSS_CERTS=1 -DUSE_OZONE=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_GNU_SOURCE -DCR_CLANG_REVISION=\"373424-64a362e7-1\" -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D_FORTIFY_SOURCE=2 -D_LIBCPP_ABI_UNSTABLE -D_LIBCPP_DISABLE_VISIBILITY_ANNOTATIONS -D_LIBCXXABI_DISABLE_VISIBILITY_ANNOTATIONS -D_LIBCPP_ENABLE_NODISCARD -DCR_LIBCXX_REVISION=361348 -DCR_SYSROOT_HASH=bcc994cc6e5d4d6f0eec8b44e7f0a65f5a1a7b90 -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -I../.. -Igen -fno-strict-aliasing --param=ssp-buffer-size=4 -fstack-protector -funwind-tables -fPIC -B../../third_party/binutils/Linux_x64/Release/bin -pthread -fcolor-diagnostics -fmerge-all-constants -fcrash-diagnostics-dir=../../tools/clang/crashreports -Xclang -mllvm -Xclang -instcombine-lower-dbg-declare=0 -fcomplete-member-pointers -m64 -march=x86-64 -Wno-builtin-macro-redefined -D__DATE__= -D__TIME__= -D__TIMESTAMP__= -no-canonical-prefixes -Wall -Werror -Wextra -Wimplicit-fallthrough -Wthread-safety -Wextra-semi -Wno-missing-field-initializers -Wno-unused-parameter -Wno-c++11-narrowing -Wno-unneeded-internal-declaration -Wno-undefined-var-template -Wno-ignored-pragma-optimize -Wno-implicit-int-float-conversion -Wno-c99-designator -Wno-reorder-init-list -Wno-final-dtor-non-final-class -Wno-sizeof-array-div -fno-omit-frame-pointer -g0 -fvisibility=hidden -Xclang -add-plugin -Xclang find-bad-constructs -Xclang -plugin-arg-find-bad-constructs -Xclang check-ipc -Wheader-hygiene -Wstring-conversion -Wtautological-overlap-compare -O2 -fno-ident -fdata-sections -ffunction-sections -std=c++14 -fno-exceptions -fno-rtti -nostdinc++ -isystem../../buildtools/third_party/libc++/trunk/include -isystem../../buildtools/third_party/libc++abi/trunk/include --sysroot=../../build/linux/debian_sid_amd64-sysroot -fvisibility-inlines-hidden -c ../../base/base_switches.cc -o obj/base/base_static/base_switches.o
../../third_party/llvm-build/Release+Asserts/bin/clang++: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by /build/chromium/src/third_party/llvm-build/Release+Asserts/bin/../lib/libstdc++.so.6)
ninja: build stopped: subcommand failed.
The command '/bin/sh -c sh /build.sh' returned a non-zero code: 1
The way headless Chrome is implemented/orchestrated with Puppeteer in a Lambda function has changed compared to the earlier runtimes.
I have used chrome-aws-lambda to run headless Chrome with Puppeteer in Lambda on the Node.js 10.x/12.x runtimes. Amazon has already announced the EOL (End of Life) of Node.js 8.x in Lambda.
Amazon uses Amazon Linux 2 for the Node.js 10.x/12.x runtimes and Amazon Linux for Node.js 8.x. Amazon Linux 2 lacks some of the dependencies headless Chrome needs, which causes problems when running headless Chrome with Puppeteer on Node.js 10.x/12.x inside Lambda, so a Lambda layer is used to run headless Chrome with Puppeteer on Node.js 10.x/12.x.
chrome-aws-lambda provides an up-to-date version of Puppeteer and headless Chrome.
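A minimal handler sketch along the lines of the chrome-aws-lambda README (assuming chrome-aws-lambda and puppeteer-core are bundled with the function or attached via a layer; the URL and event shape here are placeholders):

const chromium = require('chrome-aws-lambda');

exports.handler = async (event) => {
  let browser = null;
  try {
    // args, executablePath and headless all point at the Chromium build shipped by the package
    browser = await chromium.puppeteer.launch({
      args: chromium.args,
      defaultViewport: chromium.defaultViewport,
      executablePath: await chromium.executablePath,
      headless: chromium.headless,
    });
    const page = await browser.newPage();
    await page.goto(event.url || 'https://example.com');
    return await page.title();
  } finally {
    if (browser !== null) {
      await browser.close();
    }
  }
};

Because the package already ships a Chromium build that runs on the Amazon Linux 2 runtimes, there is no need to compile Chromium yourself.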
I have had this problem as well. If your app is not required to be hosted on AWS, the easiest way to put it in the cloud is to use Google Cloud Functions with a decent amount of RAM. Here is a video showing how: https://www.youtube.com/watch?v=i8THvr03FaY
Related
I'm trying to run some code using PyTorch (and the RoBERTa language model) on an EC2 instance on AWS.
The compilation seems to fail; does anyone have a pointer to a fix?
First, confirm that PyTorch is correctly installed:
import torch
a = torch.rand(5,3)
print (a)
This returns: tensor([[0.7494, 0.5213, 0.8622],...
Then attempt to load RoBERTa:
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large.mnli')
Using cache found in /home/ubuntu/.cache/torch/hub/pytorch_fairseq_master
/home/ubuntu/.local/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
fatal: not a git repository (or any of the parent directories): .git
running build_ext
/home/ubuntu/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py:352: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
skipping 'fairseq/data/data_utils_fast.cpp' Cython extension (up-to-date)
skipping 'fairseq/data/token_block_utils_fast.cpp' Cython extension (up-to-date)
building 'fairseq.libnat' extension
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/ubuntu/.local/lib/python3.8/site-packages/torch/include -I/home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/.local/lib/python3.8/site-packages/torch/include/TH -I/home/ubuntu/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/include/python3.8 -c fairseq/clib/libnat/edit_dist.cpp -o build/temp.linux-x86_64-3.8/fairseq/clib/libnat/edit_dist.o -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=libnat -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:149,
from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
from /home/ubuntu/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
from fairseq/clib/libnat/edit_dist.cpp:9:
/home/ubuntu/.local/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:84: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
84 | #pragma omp parallel for if ((end - begin) >= grain_size)
After a long while, it then ends with:
x86_64-linux-gnu-gcc: fatal error: Killed signal terminated program cc1plus
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
I got it to work by loading the pretrained model locally instead of from the hub.
from fairseq.models.roberta import RobertaModel
roberta = RobertaModel.from_pretrained('roberta.large.mnli', 'model.pt', '/home/ubuntu/deployedapp/roberta.large')
roberta.eval()
Note that I had to go with an XLarge EC2 instance to run this; otherwise the process would be killed due to low memory.
This worked for me:
roberta = torch.hub.load('pytorch/fairseq:main', 'roberta.large.mnli')
roberta.eval()
I took the following steps after reading the suggestions at http://andhikalegawa.wordpress.com/2012/01/05/installing-mysql-python-on-snow-leopard-using-xampp-mysql/
Downloaded MySQL-python-1.2.4b4 and unzipped it.
Set mysql_config = /Applications/XAMPP/xamppfiles/bin/mysql_config (as I am using XAMPP 1.7.3).
Downloaded mysql-5.1.70-osx10.6-x86 (I could not find 5.1.55, which is the version used in XAMPP) and kept the include folder at /Applications/XAMPP/xamppfiles/.
I am new to development, so I downloaded Xcode 4.6.3 with the command line tools.
Logically it should have worked, but I am getting the following error:
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.7-intel-2.7/MySQLdb
running build_ext
building '_mysql' extension
creating build/temp.macosx-10.7-intel-2.7
llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -pipe -Dversion_info=(1,2,4,'beta',4) -D_version_=1.2.4b4 -I/Applications/XAMPP/xamppfiles/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.7-intel-2.7/_mysql.o -mmacosx-version-min=10.4 -arch i386 -arch ppc -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL
In file included from _mysql.c:44:
/Applications/XAMPP/xamppfiles/include/my_config.h:1088:1: warning: "HAVE_WCSCOLL" redefined
In file included from /System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:8,
from _mysql.c:29:
/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyconfig.h:891:1: warning: this is the location of the previous definition
llvm-gcc-4.2: error trying to exec '/usr/bin/../llvm-gcc-4.2/bin/powerpc-apple-darwin11-llvm-gcc-4.2': execvp: No such file or directory
lipo: can't figure out the architecture type of: /var/folders/6p/8bxdl12d2nq05dmwbmdzttt40000gn/T//cc0v0ehE.out
error: command 'llvm-gcc-4.2' failed with exit status 255
It looks like it's trying to build a universal binary including PowerPC, which you probably don't have:
error trying to exec '/usr/bin/../llvm-gcc-4.2/bin/powerpc-apple-darwin11-llvm-gcc-4.2'
I would suggest setting the ARCHFLAGS:
shell> rm -Rf build/
shell> ARCHFLAGS="-arch i386" /usr/bin/python setup.py build
Or use x86_64 instead of i386 on 64-bit. But since you are downloading 32-bit MySQL, it is probably best to use i386.
Good luck! (Shameless advert: you can always try MySQL Connector/Python.)
Okay, so I switched from Ubuntu 12.04 64-bit to 32-bit and installed build-essential.
I then compiled and installed GMP-5.0.5, MPFR-3.1.1, MPC-1.0, ISL-0.10 and CLOOG-0.17.0. I checked out a copy of the main GCC trunk and attempted to build it with the following configure line (from a separate directory):
../svnsrc/configure --prefix=/usr/GCC/svn --enable-__cxa_atexit --with-plugin-ld=/usr/bin/ld.gold --enable-threads=posix --enable-werror --enable-build-with-cxx --with-gmp=/usr/GCC/prereq/svn --with-mpfr=/usr/GCC/prereq/svn --with-mpc=/usr/GCC/prereq/svn --with-isl=/usr/GCC/prereq/svn --with-cloog=/usr/GCC/prereq/svn --enable-languages=c,c++
Configure ran fine and so I ran make && make check. This ran fine for a while, but then it failed with the following error:
/home/matt/GCC/svnbuild/./gcc/xgcc -B/home/matt/GCC/svnbuild/./gcc/ -B/usr/GCC/svn/i686-pc-linux-gnu/bin/ -B/usr/GCC/svn/i686-pc-linux-gnu/lib/ -isystem /usr/GCC/svn/i686-pc-linux-gnu/include -isystem /usr/GCC/svn/i686-pc-linux-gnu/sys-include -g -O2 -O2 -g -O2 -DIN_GCC -W -Wall -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fpic -mlong-double-80 -g -DIN_LIBGCC2 -fbuilding-libgcc -fno-stack-protector -fpic -mlong-double-80 -I. -I. -I../.././gcc -I../../../svnsrc/libgcc -I../../../svnsrc/libgcc/. -I../../../svnsrc/libgcc/../gcc -I../../../svnsrc/libgcc/../include -I../../../svnsrc/libgcc/config/libbid -DENABLE_DECIMAL_BID_FORMAT -DHAVE_CC_TLS -DUSE_TLS -o _muldi3.o -MT _muldi3.o -MD -MP -MF _muldi3.dep -DL_muldi3 -c ../../../svnsrc/libgcc/libgcc2.c -fvisibility=hidden -DHIDE_EXPORTS
In file included from /usr/include/stdio.h:28:0,
from ../../../svnsrc/libgcc/../gcc/tsystem.h:88,
from ../../../svnsrc/libgcc/libgcc2.c:29:
/usr/include/features.h:324:26: fatal error: bits/predefs.h: No such file or directory
#include <bits/predefs.h>
^
compilation terminated.
make[3]: *** [_muldi3.o] Error 1
make[3]: Leaving directory `/home/matt/GCC/svnbuild/i686-pc-linux-gnu/libgcc
I looked around, but everything I found said that this error occurs on x86_64 when gcc-multilib is not installed, because Ubuntu and Debian use the multiarch system, which separates the libraries. Okay, fine... but I'm using i686, so why would I need the 64-bit libraries? Any help would be appreciated. Thanks.
Try doing a
sudo apt-get install gcc-multilib
I don't think that installing a 32-bit system changes the architecture of your computer; it is still a 64-bit machine. Installing the 64-bit version of Ubuntu should only give better multi-core performance. Since your computer is still a 64-bit computer, you probably need a C compiler that can also target 64-bit machines, hence gcc-multilib. I think.
The gcc-multilib trick didn't work out for me. Instead, setting this in the build environment did the trick:
C_INCLUDE_PATH=/usr/include/$(gcc -print-multiarch)
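For example (a sketch assuming a bash-like shell; the make invocation is the one from the question):

export C_INCLUDE_PATH=/usr/include/$(gcc -print-multiarch)
make && make check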
As a requirement for chef-client, I am trying to install yajl-ruby on OpenSUSE 12.1. So far, it is returning the following message:
linux:~ # gem install yajl-ruby
Building native extensions. This could take a while...
ERROR: Error installing yajl-ruby:
ERROR: Failed to build gem native extension.
/usr/bin/ruby extconf.rb
creating Makefile
make
gcc -I. -I/usr/lib64/ruby/1.8/x86_64-linux -I/usr/lib64/ruby/1.8/x86_64-linux -I. -fPIC -fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g -g -fno-strict-aliasing -fPIC -Wall -funroll-loops -c yajl.c
gcc -I. -I/usr/lib64/ruby/1.8/x86_64-linux -I/usr/lib64/ruby/1.8/x86_64-linux -I. -fPIC -fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g -g -fno-strict-aliasing -fPIC -Wall -funroll-loops -c yajl_alloc.c
gcc -I. -I/usr/lib64/ruby/1.8/x86_64-linux -I/usr/lib64/ruby/1.8/x86_64-linux -I. -fPIC -fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g -g -fno-strict-aliasing -fPIC -Wall -funroll-loops -c yajl_buf.c
gcc -I. -I/usr/lib64/ruby/1.8/x86_64-linux -I/usr/lib64/ruby/1.8/x86_64-linux -I. -fPIC -fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g -g -fno-strict-aliasing -fPIC -Wall -funroll-loops -c yajl_encode.c
yajl_encode.c: In function ‘hexToDigit’:
yajl_encode.c:201:1: internal compiler error: Aborted
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://bugs.opensuse.org/> for instructions.
make: *** [yajl_encode.o] Error 1
Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/yajl-ruby-1.1.0 for inspection.
Results logged to /usr/lib64/ruby/gems/1.8/gems/yajl-ruby-1.1.0/ext/yajl/gem_make.out
The appropriate packages are installed:
zypper install ruby ruby-devel ruby-ri ruby-rdoc ruby-shadow gcc gcc-c++ automake autoconf make curl dmidecode
It may be an issue with the compiler, or an issue specific to OpenSUSE. So far, I am not sure which path to take.
GCC clearly recommends that you send a bug report to openSUSE with the full preprocessed source (use the -E option instead of -c and redirect the output to a file). This may be because the openSUSE GCC carries some modifications. You should check the instructions on bugs.opensuse.org and send a bug report to openSUSE. If the bug is present in vanilla GCC too, the openSUSE Bugzilla people will forward it upstream or will ask you to do so.
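For example, something along these lines should produce the preprocessed source to attach to the report (a sketch; the directory comes from the log above and the flag list is abbreviated from the failing gcc command):

cd /usr/lib64/ruby/gems/1.8/gems/yajl-ruby-1.1.0/ext/yajl
gcc -I. -I/usr/lib64/ruby/1.8/x86_64-linux -fPIC -O2 -Wall -E yajl_encode.c > yajl_encode.i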
To avoid an "internal compiler error" without filing a bug, you can try changing the build options. Usually an "internal compiler error" means that something went wrong in the complex optimization process, so you can change that process (which optimization passes are enabled and in what order). The easiest change is to lower the optimization level (e.g. replace -O2 with -O1 or -O0) or to use something like -Os.
I am trying to install the angr tool on a Raspberry Pi 3 running Ubuntu MATE 16.04. Git link: angr tool
I isolated the problem to installing PyVEX, which fails with this error. Git link: PyVEX
running install
running bdist_egg
running build
Building libVEX
cc -Ipub -Ipriv -Wall -Wmissing-prototypes -Wstrict-prototypes -Wshadow -Wpointer-arith -Wbad-function-cast -Wcast-qual -Wcast-align -Wmissing-declarations -Wwrite-strings -Wformat -Wformat-security -std=gnu99 -fstrict-aliasing -fPIC -g -malign-double -o auxprogs/genoffsets auxprogs/genoffsets.c
cc: error: unrecognized command line option ‘-malign-double’
Makefile-gcc:72: recipe for target 'pub/libvex_guest_offsets.h' failed
make: *** [pub/libvex_guest_offsets.h] Error 1
error: Unable to build libVEX.
-malign-double is for the x86 architecture as per the GCC documentation, but I have an ARM architecture. How do I fix this issue?
During the build, PyVEX downloads VEX. I think you have to download VEX yourself, fix its Makefile, and build it. Then return to building PyVEX.
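As a rough sketch (run inside the downloaded VEX source directory; Makefile-gcc is the makefile named in the error above, and the exact invocation may differ):

sed -i 's/-malign-double//g' Makefile-gcc
make -f Makefile-gcc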
Report the problem to the angr team.