Failed building wheel for xmlsec - pip

While trying to install PostHog, the pip installation fails while building the wheel for xmlsec, with the following error:
Building wheels for collected packages: pyyaml, xmlsec
Building wheel for pyyaml (pyproject.toml) ... done
Created wheel for pyyaml: filename=PyYAML-6.0-cp38-cp38-macosx_10_14_arm64.whl size=45338 sha256=fc1069bd2dcdd9d7f47f2d2faba20f111af7eb5dbc1b3fa445b971c643e9f8e4
Stored in directory: /Users/saranshagarwal/Library/Caches/pip/wheels/52/84/66/50912fd7bf1639a31758e40bd4312602e104a8eca1e0da9645
Building wheel for xmlsec (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for xmlsec (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [31 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.14-arm64-cpython-38
creating build/lib.macosx-10.14-arm64-cpython-38/xmlsec
copying src/xmlsec/py.typed -> build/lib.macosx-10.14-arm64-cpython-38/xmlsec
copying src/xmlsec/tree.pyi -> build/lib.macosx-10.14-arm64-cpython-38/xmlsec
copying src/xmlsec/__init__.pyi -> build/lib.macosx-10.14-arm64-cpython-38/xmlsec
copying src/xmlsec/constants.pyi -> build/lib.macosx-10.14-arm64-cpython-38/xmlsec
copying src/xmlsec/template.pyi -> build/lib.macosx-10.14-arm64-cpython-38/xmlsec
running build_ext
building 'xmlsec' extension
creating build/temp.macosx-10.14-arm64-cpython-38
creating build/temp.macosx-10.14-arm64-cpython-38/private
creating build/temp.macosx-10.14-arm64-cpython-38/private/var
creating build/temp.macosx-10.14-arm64-cpython-38/private/var/folders
creating build/temp.macosx-10.14-arm64-cpython-38/private/var/folders/b0
creating build/temp.macosx-10.14-arm64-cpython-38/private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn
creating build/temp.macosx-10.14-arm64-cpython-38/private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn/T
creating build/temp.macosx-10.14-arm64-cpython-38/private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn/T/pip-install-k8uaecy7
creating build/temp.macosx-10.14-arm64-cpython-38/private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn/T/pip-install-k8uaecy7/xmlsec_d0d430f1016e47f88c6c0edb21c82e85
creating build/temp.macosx-10.14-arm64-cpython-38/private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn/T/pip-install-k8uaecy7/xmlsec_d0d430f1016e47f88c6c0edb21c82e85/src
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -arch arm64 -arch x86_64 -Werror=implicit-function-declaration -I /opt/homebrew/opt/openssl/include -D__XMLSEC_FUNCTION__=__func__ -DXMLSEC_NO_SIZE_T -DXMLSEC_NO_GOST=1 -DXMLSEC_NO_GOST2012=1 -DXMLSEC_NO_CRYPTO_DYNAMIC_LOADING=1 -DXMLSEC_CRYPTO_OPENSSL=1 -DMODULE_NAME=xmlsec -DMODULE_VERSION=1.3.12 -I/opt/homebrew/Cellar/libxmlsec1/1.2.34_1/include/xmlsec1 -I/opt/homebrew/opt/openssl@1.1/include -I/opt/homebrew/opt/openssl@1.1/include/openssl -I/private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn/T/pip-build-env-vytaqq0a/normal/lib/python3.8/site-packages/lxml/includes -I/private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn/T/pip-build-env-vytaqq0a/normal/lib/python3.8/site-packages/lxml -I/Users/saranshagarwal/Workstation/posthog/env/include -I/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -c /private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn/T/pip-install-k8uaecy7/xmlsec_d0d430f1016e47f88c6c0edb21c82e85/src/constants.c -o build/temp.macosx-10.14-arm64-cpython-38/private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn/T/pip-install-k8uaecy7/xmlsec_d0d430f1016e47f88c6c0edb21c82e85/src/constants.o -g -std=c99 -fPIC -fno-strict-aliasing -Wno-error=declaration-after-statement -Werror=implicit-function-declaration -Os
In file included from /private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn/T/pip-install-k8uaecy7/xmlsec_d0d430f1016e47f88c6c0edb21c82e85/src/constants.c:11:
In file included from /private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn/T/pip-install-k8uaecy7/xmlsec_d0d430f1016e47f88c6c0edb21c82e85/src/constants.h:13:
/private/var/folders/b0/3bqprk097hl33cl_byclp6pc0000gn/T/pip-install-k8uaecy7/xmlsec_d0d430f1016e47f88c6c0edb21c82e85/src/platform.h:16:10: fatal error: 'Python.h' file not found
#include <Python.h>
^~~~~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for xmlsec
Successfully built pyyaml
Failed to build xmlsec
ERROR: Could not build wheels for xmlsec, which is required to install pyproject.toml-based projects
Here is my requirements file:
#
# This file is autogenerated by pip-compile with python 3.8
# To update, run:
#
# pip-compile requirements.in
#
aiohttp==3.8.1
# via geoip2
aiosignal==1.2.0
# via aiohttp
amqp==2.6.0
# via
# -r requirements.in
# kombu
asgiref==3.3.2
# via django
async-generator==1.10
# via
# trio
# trio-websocket
async-timeout==4.0.2
# via
# aiohttp
# redis
attrs==21.4.0
# via
# aiohttp
# jsonschema
# outcome
# trio
backoff==1.6.0
# via posthoganalytics
billiard==3.6.3.0
# via celery
boto3==1.21.29
# via -r requirements.in
botocore==1.24.46
# via
# boto3
# s3transfer
celery==4.4.7
# via
# -r requirements.in
# celery-redbeat
celery-redbeat==2.0.0
# via -r requirements.in
certifi==2019.11.28
# via
# requests
# sentry-sdk
# urllib3
cffi==1.14.5
# via cryptography
chardet==3.0.4
# via requests
charset-normalizer==2.1.0
# via aiohttp
clickhouse-driver==0.2.1
# via
# -r requirements.in
# clickhouse-pool
clickhouse-pool==0.5.3
# via -r requirements.in
cryptography==37.0.2
# via
# kafka-helper
# pyopenssl
# social-auth-core
# urllib3
cssselect==1.1.0
# via toronado
cssutils==1.0.2
# via toronado
dataclasses-json==0.5.7
# via -r requirements.in
defusedxml==0.6.0
# via
# -r requirements.in
# python3-openid
# social-auth-core
deprecated==1.2.13
# via redis
dj-database-url==0.5.0
# via -r requirements.in
django==3.2.15
# via
# -r requirements.in
# django-axes
# django-cors-headers
# django-deprecate-fields
# django-extensions
# django-filter
# django-picklefield
# django-redis
# django-rest-hooks
# django-structlog
# djangorestframework
# djangorestframework-dataclasses
# drf-spectacular
django-axes==5.9.0
# via -r requirements.in
django-cors-headers==3.5.0
# via -r requirements.in
django-deprecate-fields==0.1.1
# via -r requirements.in
django-extensions==3.1.2
# via -r requirements.in
django-filter==2.4.0
# via -r requirements.in
django-ipware==3.0.2
# via
# django-axes
# django-structlog
django-loginas==0.3.9
# via -r requirements.in
django-picklefield==3.0.1
# via -r requirements.in
django-prometheus==2.2.0
# via -r requirements.in
django-redis==5.2.0
# via -r requirements.in
django-rest-hooks @ git+https://github.com/zapier/django-rest-hooks.git@v1.6.0
# via -r requirements.in
django-statsd==2.5.2
# via -r requirements.in
django-structlog==2.1.3
# via -r requirements.in
djangorestframework==3.12.2
# via
# -r requirements.in
# djangorestframework-csv
# djangorestframework-dataclasses
# drf-exceptions-hog
# drf-extensions
# drf-spectacular
djangorestframework-csv==2.1.1
# via -r requirements.in
djangorestframework-dataclasses==1.1.1
# via -r requirements.in
dnspython==2.2.1
# via -r requirements.in
drf-exceptions-hog==0.2.0
# via -r requirements.in
drf-extensions==0.7.0
# via -r requirements.in
drf-spectacular==0.21.1
# via -r requirements.in
frozenlist==1.3.0
# via
# aiohttp
# aiosignal
future==0.18.2
# via lzstring
geoip2==4.6.0
# via -r requirements.in
google-cloud-sqlcommenter==2.0.0
# via -r requirements.in
gunicorn==20.1.0
# via -r requirements.in
h11==0.13.0
# via wsproto
idna==2.8
# via
# -r requirements.in
# requests
# trio
# urllib3
# yarl
importlib-metadata==1.6.0
# via -r requirements.in
importlib-resources==5.9.0
# via jsonschema
infi-clickhouse-orm @ git+https://github.com/PostHog/infi.clickhouse_orm@37722f350f3b449bbcd6564917c436b0d93e796f
# via -r requirements.in
inflection==0.5.1
# via drf-spectacular
iso8601==0.1.12
# via infi-clickhouse-orm
isodate==0.6.1
# via python3-saml
jmespath==1.0.0
# via
# boto3
# botocore
jsonschema==4.4.0
# via drf-spectacular
kafka-helper==0.2
# via -r requirements.in
kafka-python==2.0.2
# via -r requirements.in
kombu==4.6.10
# via
# -r requirements.in
# celery
lxml==4.6.5
# via
# python3-saml
# toronado
# xmlsec
lzstring==1.0.4
# via -r requirements.in
marshmallow==3.15.0
# via
# dataclasses-json
# marshmallow-enum
marshmallow-enum==1.5.1
# via dataclasses-json
maxminddb==2.2.0
# via geoip2
mimesis==5.2.1
# via -r requirements.in
monotonic==1.5
# via posthoganalytics
multidict==6.0.2
# via
# aiohttp
# yarl
mypy-extensions==0.4.3
# via typing-inspect
numpy==1.23.3
# via -r requirements.in
oauthlib==3.1.0
# via
# requests-oauthlib
# social-auth-core
outcome==1.1.0
# via trio
packaging==21.3
# via
# marshmallow
# redis
parso==0.8.1
# via -r requirements.in
pexpect==4.7.0
# via -r requirements.in
pickleshare==0.7.5
# via -r requirements.in
posthoganalytics==2.1.2
# via -r requirements.in
prometheus-client==0.14.1
# via django-prometheus
psycopg2-binary==2.8.6
# via -r requirements.in
ptyprocess==0.6.0
# via pexpect
pycparser==2.20
# via cffi
pyjwt==2.4.0
# via
# -r requirements.in
# social-auth-core
pyopenssl==22.0.0
# via urllib3
pyparsing==3.0.7
# via packaging
pyrsistent==0.18.1
# via jsonschema
pysocks==1.7.1
# via urllib3
python-dateutil==2.8.1
# via
# -r requirements.in
# botocore
# celery-redbeat
# posthoganalytics
python-statsd==2.1.0
# via django-statsd
python3-openid==3.1.0
# via social-auth-core
python3-saml==1.12.0
# via -r requirements.in
pytz==2021.1
# via
# -r requirements.in
# celery
# clickhouse-driver
# django
# infi-clickhouse-orm
# tzlocal
pyyaml==6.0
# via drf-spectacular
redis==4.3.4
# via
# -r requirements.in
# celery-redbeat
# django-redis
requests==2.25.1
# via
# -r requirements.in
# django-rest-hooks
# geoip2
# infi-clickhouse-orm
# posthoganalytics
# requests-oauthlib
# social-auth-core
# webdriver-manager
requests-oauthlib==1.3.0
# via
# -r requirements.in
# social-auth-core
s3transfer==0.5.2
# via boto3
selenium==4.1.5
# via -r requirements.in
semantic-version==2.8.5
# via -r requirements.in
sentry-sdk==1.7.0
# via -r requirements.in
six==1.16.0
# via
# djangorestframework-csv
# isodate
# posthoganalytics
# python-dateutil
# tenacity
slack-sdk==3.17.1
# via -r requirements.in
sniffio==1.2.0
# via trio
social-auth-app-django==5.0.0
# via -r requirements.in
social-auth-core==4.1.0
# via
# -r requirements.in
# social-auth-app-django
sortedcontainers==2.4.0
# via trio
sqlparse==0.4.2
# via
# -r requirements.in
# django
statshog==1.0.6
# via -r requirements.in
structlog==21.2.0
# via django-structlog
tenacity==6.1.0
# via celery-redbeat
toronado==0.1.0
# via -r requirements.in
trio==0.20.0
# via
# selenium
# trio-websocket
trio-websocket==0.9.2
# via selenium
typing-extensions==4.1.1
# via typing-inspect
typing-inspect==0.7.1
# via dataclasses-json
tzlocal==2.1
# via clickhouse-driver
unicodecsv==0.14.1
# via djangorestframework-csv
uritemplate==4.1.1
# via drf-spectacular
urllib3[secure,socks]==1.26.5
# via
# botocore
# geoip2
# requests
# selenium
# sentry-sdk
vine==1.3.0
# via
# amqp
# celery
webdriver-manager==3.5.4
# via -r requirements.in
whitenoise==5.2.0
# via -r requirements.in
wrapt==1.14.1
# via deprecated
wsproto==1.1.0
# via trio-websocket
xmlsec==1.3.12
# via python3-saml
yarl==1.7.2
# via aiohttp
zipp==3.1.0
# via
# importlib-metadata
# importlib-resources
# The following packages are considered to be unsafe in a requirements file:
# setuptools
I tried out several answers, but nothing worked. I think the path is not getting updated after installing the packages from Homebrew.

Looking at the PyPI page for xmlsec
https://pypi.org/project/xmlsec/
on macOS it says you need to install
libxml2 >= 2.9.1
libxmlsec1 >= 1.2.18
to build the package. Those aren't covered by your requirements, so maybe that's why the wheel is failing?
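A minimal sketch of installing them with Homebrew before rebuilding the wheel; the keg-only libxml2 hint via PKG_CONFIG_PATH is an assumption on my part rather than something from the PyPI page:
brew install libxml2 libxmlsec1
# libxml2 is keg-only, so help the build find its headers
export PKG_CONFIG_PATH="$(brew --prefix libxml2)/lib/pkgconfig:$PKG_CONFIG_PATH"
pip install xmlsec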

Related

start tmux sessions in googlecloud startup-script

I added a startup-script entry in the metadata of my Google Cloud instance as suggested in the docs here;
the approach from the question Google Compute Engine - Start tmux with startup-script didn't work for me.
my startup-script code is:
#! /bin/bash
tmux start-server
tmux new -d -s data_vis_pfs 'pachctl mount /var/data_vis/pfs'
tmux new -d -s data_vis_server 'cd /var/data_vis/server/ && python ./index.py'
tmux new -d -s data_vis_client 'cd /var/data_vis/client/ && npx serve -l 3001 -s build'
I also tried : 
#! /bin/bash
tmux start \; \
new -d -s data_vis_pfs 'pachctl mount /var/data_vis/pfs' \; \
new -d -s data_vis_server 'cd /var/data_vis/server/ && python ./index.py' \; \
new -d -s data_vis_client 'cd /var/data_vis/client/ && npx serve -l 3001 -s build'
When I run sudo journalctl -u google-startup-scripts.service after the machine boots up, I get:
Aug 24 12:20:40 work1-cpu systemd[1]: Starting Google Compute Engine Startup Scripts...
Aug 24 12:20:42 work1-cpu GCEMetadataScripts[506]: 2021/08/24 12:20:42 GCEMetadataScripts: Starting startup scripts (version 20201214.00).
Aug 24 12:20:42 work1-cpu GCEMetadataScripts[506]: 2021/08/24 12:20:42 GCEMetadataScripts: Found startup-script in metadata.
Aug 24 12:20:42 work1-cpu GCEMetadataScripts[506]: 2021/08/24 12:20:42 GCEMetadataScripts: startup-script exit status 0
Aug 24 12:20:42 work1-cpu GCEMetadataScripts[506]: 2021/08/24 12:20:42 GCEMetadataScripts: Finished running startup scripts.
Aug 24 12:20:42 work1-cpu systemd[1]: google-startup-scripts.service: Succeeded.
Aug 24 12:20:42 work1-cpu systemd[1]: Started Google Compute Engine Startup Scripts.
so it's supposed to be a success (exit status 0).
But my code doesn't seem to be running (the Python server is not launched, and neither are the front end or the pachctl mount). A top command doesn't show them either.
I know I am not supposed to see the sessions since the script is run by root, and I could fix that through the socket, but I don't care for the moment: I just need the code to be launched.
Does someone have a clue about what I am missing?
There were various errors. Thanks to Wojtek_B for his detailed answer, which pointed me in the right direction.
1 - First problem: dependencies
I had to install all the needed dependencies at the start of the script; in my case:
1.1 - system:
sudo apt update
sudo apt install -y tmux pachctl nodejs npm python3-setuptools python3.7-dev
1.2 - python:
python3 -m pip install {all packages here....}
The list of packages to install was retrieved with a pip3 list while logged in to the machine.
Note the python3 -m pip instead of simply pip or pip3. This matters if there is a Python 2.x on the machine (the gcloud image uses 2.x by default), in which case a plain pip install (and even pip3 install) does not target the right interpreter. This python3 -m pip install ... works, so I would advise it.
1.3 - node
npm install -g npx
2 - Tmux:
tmux start \; \
new -d -s sleep 'sleep 1'\; \
new -d -s data_vis_pfs 'export KUBECONFIG=/var/data_vis/.kub/config && gcloud auth activate-service-account pfsmounter@{PROJECT}.iam.gserviceaccount.com --key-file=/var/data_vis/sa_cred.json &>> /tmp/pfs_log.txt && gcloud container clusters get-credentials {CLUSTER_NAME} --zone={ZONE_NAME} &>> /tmp/pfs_log.txt && kubectl config current-context &>> /tmp/pfs_log.txt && pachctl list repo && pachctl mount /var/data_vis/pfs --verbose &>> /tmp/pfs_log.txt' \; \
new -d -s data_vis_server 'sleep 1 && ls /var/data_vis/pfs/ &>> /tmp/debug1.txt && cd /var/data_vis/server/ && python3 ./index.py &>> /tmp/server_log.txt' \; \
new -d -s data_vis_client 'cd /var/data_vis/client/ && npx serve -l 3001 -s build &>> /tmp/client_log.txt'
First session, sleep: in my case not useful, but it seems to be good practice so that the script does not close too early.
Second session, pachyderm:
I had to create a service account (in the Cloud Console, go to the service accounts page, or use the gcloud service accounts commands if you don't trust this link)
with the following authorizations (sorry if the labels aren't exact; I had to translate them from my language):
Reader of cluster Kubernetes Engine
Service agent of Kubernetes Engine
Reader Kubernetes Engine
Note the {CLUSTER_NAME} and {ZONE_NAME} (find them through gcloud container clusters list) and {PROJECT} placeholders, to be replaced with your own values; a hedged sketch of creating this service account follows after this list. I had to manually do export KUBECONFIG=/var/data_vis/.kub/config, otherwise it would fail in the tmux session (although it was working in the main shell).
Third session, the Flask server (Python): nothing special; I added a sleep just in case.
Fourth session, the front application: nothing special.
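Before the final code, a hedged sketch of creating that service account from the command line; the account name is taken from the script above, and the role IDs are only my guesses for the translated labels:
# create the account and grant it the (guessed) Kubernetes Engine roles
gcloud iam service-accounts create pfsmounter --project={PROJECT}
gcloud projects add-iam-policy-binding {PROJECT} \
  --member="serviceAccount:pfsmounter@{PROJECT}.iam.gserviceaccount.com" \
  --role="roles/container.clusterViewer"
gcloud projects add-iam-policy-binding {PROJECT} \
  --member="serviceAccount:pfsmounter@{PROJECT}.iam.gserviceaccount.com" \
  --role="roles/container.viewer"
# export the key file that the tmux session above points at
gcloud iam service-accounts keys create /var/data_vis/sa_cred.json \
  --iam-account=pfsmounter@{PROJECT}.iam.gserviceaccount.com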
Final code:
sudo apt update
sudo apt install -y tmux pachctl nodejs npm python3-setuptools python3.7-dev
python3 -m pip install adal aiohttp ansiwrap anyio appdirs argcomplete argon2-cffi arrow asn1crypto async-generator async-timeout attrs backcall backports.functools-lru-cache bidict binaryornot black bleach blinker blis bokeh boto boto3 botocore brotlipy bz2file cachetools catalogue certifi cffi chardet charset-normalizer click cloudpickle colorama colorcet confuse cookiecutter cryptography cycler cymem dask datashader datashape debugpy decorator defusedxml distributed docker docker-pycreds entrypoints fastai fastcore fastprogress Flask Flask-Cors Flask-SocketIO fsspec gcsfs gitdb GitPython google-api-core google-api-python-client google-auth google-auth-httplib2 google-auth-oauthlib google-cloud-bigquery google-cloud-bigquery-storage google-cloud-bigtable google-cloud-core google-cloud-dataproc google-cloud-datastore google-cloud-firestore google-cloud-kms google-cloud-language google-cloud-logging google-cloud-monitoring google-cloud-pubsub google-cloud-scheduler google-cloud-spanner google-cloud-speech google-cloud-storage google-cloud-tasks google-cloud-translate google-cloud-videointelligence google-cloud-vision google-crc32c google-resumable-media googleapis-common-protos grpc-google-iam-v1 grpcio grpcio-gcp h11 HeapDict holoviews htmlmin httplib2 idna ImageHash imageio importlib-metadata ipykernel ipython ipython-genutils ipython-sql ipywidgets itsdangerous jedi Jinja2 jinja2-time jmespath joblib json5 jsonschema jupyter-client jupyter-core jupyter-http-over-ws jupyterlab jupyterlab-git jupyterlab-pygments jupyterlab-server jupyterlab-widgets kiwisolver kubernetes llvmlite locket loguru Markdown MarkupSafe matplotlib matplotlib-inline missingno mistune msgpack multidict multimethod multipledispatch murmurhash mypy-extensions nbclient nbconvert nbdime nbformat nest-asyncio networkx numba numpy oauthlib olefile packaging pandas pandas-profiling pandocfilters panel papermill param parso partd pathspec pathy patsy pexpect phik pickleshare Pillow pip poyo preshed prettytable prometheus-client prompt-toolkit protobuf psutil ptyprocess pyarrow pyasn1 pyasn1-modules pycosat pycparser pyct pydantic Pygments PyJWT pynndescent pyOpenSSL pyparsing pyrsistent PySocks python-dateutil python-engineio python-pachyderm python-slugify python-socketio pytz pyviz-comms PyWavelets PyYAML pyzmq rawpy regex requests requests-oauthlib requests-unixsocket retrying rope rsa ruamel.yaml ruamel.yaml.clib s3transfer scikit-image scikit-learn scipy seaborn Send2Trash setuptools shellingham simple-websocket simplejson six smart-open smmap sniffio sortedcontainers spacy spacy-legacy SQLAlchemy sqlparse srsly statsmodels tangled-up-in-unicode tblib tenacity terminado testpath text-unidecode textwrap3 thinc threadpoolctl tifffile toml tomli toolz torch torchvision tornado tqdm traitlets typed-ast typeguard typer typing-extensions umap-learn umap-learn[plot] Unidecode uritemplate urllib3 visions wasabi wcwidth webencodings websocket-client Werkzeug wheel whichcraft widgetsnbextension wrapt wsproto xarray yarl zict zipp
#pip3 list &>> /tmp/debug1.txt
npm install -g npx
#nodejs --version &>> /tmp/debug1.txt
tmux start \; \
new -d -s sleep 'sleep 1'\; \
new -d -s data_vis_pfs 'export KUBECONFIG=/var/data_vis/.kub/config && gcloud auth activate-service-account pfsmounter@{PROJECT}.iam.gserviceaccount.com --key-file=/var/data_vis/sa_cred.json &>> /tmp/pfs_log.txt && gcloud container clusters get-credentials {CLUSTER_NAME} --zone={ZONE_NAME} &>> /tmp/pfs_log.txt && kubectl config current-context &>> /tmp/pfs_log.txt && pachctl list repo && pachctl mount /var/data_vis/pfs --verbose &>> /tmp/pfs_log.txt' \; \
new -d -s data_vis_server 'sleep 1 && ls /var/data_vis/pfs/ &>> /tmp/debug1.txt && cd /var/data_vis/server/ && python3 ./index.py &>> /tmp/server_log.txt' \; \
new -d -s data_vis_client 'cd /var/data_vis/client/ && npx serve -l 3001 -s build &>> /tmp/client_log.txt'
First - depending on the image you're running your machine from - it has to have tmux installed. If it's a new machine with Debian 10 you need to put sudo apt install tmux -y at the start of your startup script to install it.
To check whether the script ran at start-up you can add a touch /tmp/testfile1.txt at the end and, once the VM has booted, check if the file exists. That's the easiest (though not the most reliable) way to tell whether the script ran.
I'm not familiar with tmux, but I've found out that the server will exit if there are no sessions created; it looks to me like the server exits before the new sessions are established. You can try using the sleep 1 suggested here to solve your issue.
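A minimal sketch of that idea, reusing the first command from the question (the keepalive session name and the one-second sleep are arbitrary choices):
tmux start-server
# throwaway session so the tmux server does not exit before the real sessions are created
tmux new -d -s keepalive 'sleep 1'
tmux new -d -s data_vis_pfs 'pachctl mount /var/data_vis/pfs'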
I tried running your script as-is and had the same results as you, but with the debugging I describe below everything worked.
I added some "debugging" lines to the script and ran it:
apt update && sudo apt install tmux -y &>> /tmp/debug1.txt
tmux start-server &>> /tmp/debug1.txt && echo "--- Line 1 OK" >>/tmp/debug1.txt
tmux new -d -s data_vis_pfs 'pachctl mount /var/data_vis/pfs' &>> /tmp/debug1.txt && echo "--- Line 2 OK" >>/tmp/debug1.txt
tmux new -d -s data_vis_server 'cd /var/data_vis/server/ && python ./index.py' &>> /tmp/debug1.txt && echo "--- Line 3 OK" >>/tmp/debug1.txt
tmux new -d -s data_vis_client 'cd /var/data_vis/client/ && npx serve -l 3001 -s build' &>> /tmp/debug1.txt && echo "--- Line 4 OK" >>/tmp/debug1.txt
And my result was (I've removed some lines when installing tmux):
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
libevent-2.1-6 libutempter0
...... removed some lines for better readability ......
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for libc-bin (2.28-10) ...
--- Line 1 OK
--- Line 2 OK
--- Line 3 OK
--- Line 4 OK
Conclusion: your script did run at the start of the VM. You just need to figure out how to start the tmux server and sessions, and then use that as your startup script.

/bin/bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory - error during CocoaPods installation

I am trying to initialize a React-Native project, and following the docs, I did:
brew install node
brew install watchman
sudo gem install cocoapods
Also uninstalling react-native-cli, as suggested in the docs here.
Nothing wrong with those steps, until:
npx react-native init AwesomeProject
Then I got an error message:
Welcome to React Native!
Learn once, write anywhere
✔ Downloading template
✔ Copying template
✔ Processing template
⠏ Installing CocoaPods dependencies (this may take a few minutes)
Analyzing dependencies
Fetching podspec for `DoubleConversion` from `../node_modules/react-native/third-party-podspecs/DoubleConversion.podspec`
Fetching podspec for `Folly` from `../node_modules/react-native/third-party-podspecs/Folly.podspec`
Fetching podspec for `glog` from `../node_modules/react-native/third-party-podspecs/glog.podspec`
Downloading dependencies
Installing DoubleConversion (1.1.6)
Installing FBLazyVector (0.61.5)
Installing FBReactNativeSpec (0.61.5)
Installing Folly (2018.10.22.00)
Installing RCTRequired (0.61.5)
Installing RCTTypeSafety (0.61.5)
Installing React (0.61.5)
Installing React-Core (0.61.5)
Installing React-CoreModules (0.61.5)
Installing React-RCTActionSheet (0.61.5)
Installing React-RCTAnimation (0.61.5)
Installing React-RCTBlob (0.61.5)
Installing React-RCTImage (0.61.5)
Installing React-RCTLinking (0.61.5)
Installing React-RCTNetwork (0.61.5)
Installing React-RCTSettings (0.61.5)
Installing React-RCTText (0.61.5)
Installing React-RCTVibration (0.61.5)
Installing React-cxxreact (0.61.5)
Installing React-jsi (0.61.5)
Installing React-jsiexecutor (0.61.5)
Installing React-jsinspector (0.61.5)
Installing ReactCommon (0.61.5)
Installing Yoga (1.14.0)
Installing boost-for-react-native (1.63.0)
Installing glog (0.3.5)
[!] /bin/bash -c
set -e
#!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
set -e
PLATFORM_NAME="${PLATFORM_NAME:-iphoneos}"
CURRENT_ARCH="${CURRENT_ARCH}"
if [ -z "$CURRENT_ARCH" ] || [ "$CURRENT_ARCH" == "undefined_arch" ]; then
# Xcode 10 beta sets CURRENT_ARCH to "undefined_arch", this leads to incorrect linker arg.
# it's better to rely on platform name as fallback because architecture differs between simulator and device
if [[ "$PLATFORM_NAME" == *"simulator"* ]]; then
CURRENT_ARCH="x86_64"
else
CURRENT_ARCH="armv7"
fi
fi
export CC="$(xcrun -find -sdk $PLATFORM_NAME cc) -arch $CURRENT_ARCH -isysroot $(xcrun -sdk $PLATFORM_NAME --show-sdk-path)"
export CXX="$CC"
# Remove automake symlink if it exists
if [ -h "test-driver" ]; then
rm test-driver
fi
./configure --host arm-apple-darwin
# Fix build for tvOS
cat << EOF >> src/config.h
/* Add in so we have Apple Target Conditionals */
#ifdef __APPLE__
#include <TargetConditionals.h>
#include <Availability.h>
#endif
/* Special configuration for AppleTVOS */
#if TARGET_OS_TV
#undef HAVE_SYSCALL_H
#undef HAVE_SYS_SYSCALL_H
#undef OS_MACOSX
#endif
/* Special configuration for ucontext */
#undef HAVE_UCONTEXT_H
#undef PC_FROM_UCONTEXT
#if defined(__x86_64__)
#define PC_FROM_UCONTEXT uc_mcontext->__ss.__rip
#elif defined(__i386__)
#define PC_FROM_UCONTEXT uc_mcontext->__ss.__eip
#endif
EOF
# Prepare exported header include
EXPORTED_INCLUDE_DIR="exported/glog"
mkdir -p exported/glog
cp -f src/glog/log_severity.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/raw_logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/stl_logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/vlog_is_on.h "$EXPORTED_INCLUDE_DIR/"
/bin/bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory
✖ Installing CocoaPods dependencies (this may take a few minutes)
error Error: Failed to install CocoaPods dependencies for iOS project, which is required by this template.
Please try again manually: "cd ./AwesomeProject/ios && pod install".
CocoaPods documentation: https://cocoapods.org/
I tried to follow the recommendation and run
cd ./AwesomeProject/ios && pod install
But got another error here:
Analyzing dependencies
Fetching podspec for `DoubleConversion` from `../node_modules/react-native/third-party-podspecs/DoubleConversion.podspec`
Fetching podspec for `Folly` from `../node_modules/react-native/third-party-podspecs/Folly.podspec`
Fetching podspec for `glog` from `../node_modules/react-native/third-party-podspecs/glog.podspec`
Downloading dependencies
Installing DoubleConversion (1.1.6)
Installing FBLazyVector (0.61.5)
Installing FBReactNativeSpec (0.61.5)
Installing Folly (2018.10.22.00)
Installing RCTRequired (0.61.5)
Installing RCTTypeSafety (0.61.5)
Installing React (0.61.5)
Installing React-Core (0.61.5)
Installing React-CoreModules (0.61.5)
Installing React-RCTActionSheet (0.61.5)
Installing React-RCTAnimation (0.61.5)
Installing React-RCTBlob (0.61.5)
Installing React-RCTImage (0.61.5)
Installing React-RCTLinking (0.61.5)
Installing React-RCTNetwork (0.61.5)
Installing React-RCTSettings (0.61.5)
Installing React-RCTText (0.61.5)
Installing React-RCTVibration (0.61.5)
Installing React-cxxreact (0.61.5)
Installing React-jsi (0.61.5)
Installing React-jsiexecutor (0.61.5)
Installing React-jsinspector (0.61.5)
Installing ReactCommon (0.61.5)
Installing Yoga (1.14.0)
Installing boost-for-react-native (1.63.0)
Installing glog (0.3.5)
[!] /bin/bash -c
set -e
#!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
set -e
PLATFORM_NAME="${PLATFORM_NAME:-iphoneos}"
CURRENT_ARCH="${CURRENT_ARCH}"
if [ -z "$CURRENT_ARCH" ] || [ "$CURRENT_ARCH" == "undefined_arch" ]; then
# Xcode 10 beta sets CURRENT_ARCH to "undefined_arch", this leads to incorrect linker arg.
# it's better to rely on platform name as fallback because architecture differs between simulator and device
if [[ "$PLATFORM_NAME" == *"simulator"* ]]; then
CURRENT_ARCH="x86_64"
else
CURRENT_ARCH="armv7"
fi
fi
export CC="$(xcrun -find -sdk $PLATFORM_NAME cc) -arch $CURRENT_ARCH -isysroot $(xcrun -sdk $PLATFORM_NAME --show-sdk-path)"
export CXX="$CC"
# Remove automake symlink if it exists
if [ -h "test-driver" ]; then
rm test-driver
fi
./configure --host arm-apple-darwin
# Fix build for tvOS
cat << EOF >> src/config.h
/* Add in so we have Apple Target Conditionals */
#ifdef __APPLE__
#include <TargetConditionals.h>
#include <Availability.h>
#endif
/* Special configuration for AppleTVOS */
#if TARGET_OS_TV
#undef HAVE_SYSCALL_H
#undef HAVE_SYS_SYSCALL_H
#undef OS_MACOSX
#endif
/* Special configuration for ucontext */
#undef HAVE_UCONTEXT_H
#undef PC_FROM_UCONTEXT
#if defined(__x86_64__)
#define PC_FROM_UCONTEXT uc_mcontext->__ss.__rip
#elif defined(__i386__)
#define PC_FROM_UCONTEXT uc_mcontext->__ss.__eip
#endif
EOF
# Prepare exported header include
EXPORTED_INCLUDE_DIR="exported/glog"
mkdir -p exported/glog
cp -f src/glog/log_severity.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/raw_logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/stl_logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/vlog_is_on.h "$EXPORTED_INCLUDE_DIR/"
/bin/bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory
I tried to do it the old way too, by re-installing react-native-cli, but got the same error.
Can someone please help explain what the problem is (bash / glog / some configuration file)?
And how should I fix this?
My environment is currently:
macOS Catalina 10.15.3
Xcode version 11.3.1 (11C504)
Node version 13.8.0
npm version 6.13.6
Homebrew version 2.2.7
Gem version 3.1.2
ruby 2.6.3p62 (2019-04-16 revision 67580) [universal.x86_64-darwin19]
Your help is greatly appreciated ! :) thank you
Finally someone solved it with the Sublime Text editor, since converting the file with dos2unix and vim somehow didn't work. Please follow the link here.
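For reference, a hedged sketch of fixing the line endings by hand; the path to the offending configure script inside the generated project is an assumption on my part:
cd ./AwesomeProject/ios
# strip Windows-style carriage returns from the script the error points at
perl -pi -e 's/\r$//' Pods/glog/configure
pod install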

SSL Certificate error while running python -m nltk.downloader -d $NLTK_DATA punkt command on aws lambda

I'm getting an SSL certificate error while deploying the following code to AWS Lambda using an AWS CodeStar build pipeline.
I've looked at multiple community discussions; nothing worked out.
version: 0.2
phases:
  install:
    commands:
      # Upgrade AWS CLI & PIP to the latest version
      - pip install --upgrade awscli
      - pip install --upgrade pip
      # Define Directories
      - export HOME_DIR=`pwd`
      - export NLTK_DATA=$HOME_DIR/nltk_data
  pre_build:
    commands:
      - cd $HOME_DIR
      # Create VirtualEnv to package for lambda
      - virtualenv venv
      - . venv/bin/activate
      # Install Supporting Libraries
      - pip install -U scikit-learn
      - pip install -U requests
      # Install WordNet
      - pip install -U nltk
      - python -m nltk.downloader -d $NLTK_DATA punkt
      # Output Requirements
      - pip freeze > requirements.txt
      # Discover and run unit tests in the 'tests' directory. For more information, see <https://docs.python.org/3/library/unittest.html#test-discovery>
      - python -m unittest discover tests
  build:
    commands:
      - cd $HOME_DIR
      - mv $VIRTUAL_ENV/lib/python3.6/site-packages/* .
The only way that worked for me was to download the modules and install them into an nltk_data folder inside my source folder, then create a Lambda environment variable NLTK with the value ./nltk_data.
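A minimal sketch of that approach, run on the build machine so the data ships inside the deployment package (the environment variable name NLTK is taken from the answer; adjust it if your code looks for NLTK_DATA instead):
pip install -U nltk
# download punkt into the source folder that gets zipped into the Lambda package
python -m nltk.downloader -d ./nltk_data punkt
# then set the Lambda environment variable described above, e.g. NLTK=./nltk_data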

vagrant path to shell script windows

There is a problem with the path to a shell script on Windows.
On Linux, the Vagrantfile works fine.
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provider :virtualbox do |vb|
    vb.name = "studstat_dev"
  end
  config.vm.network "forwarded_port", guest: 8000, host: 8888
  config.vm.network "forwarded_port", guest: 5432, host: 15432
  # Provisioning with shell script
  config.vm.provision "shell", path: "studstat.sh"
end
studstat.sh
#!/bin/sh -e
# Edit the following to change the name of the database user that will be created:
APP_DB_USER=vagrant
APP_DB_PASS=vagrant
# Edit the following to change the name of the database that is created (defaults to the user name)
APP_DB_NAME=studstat
# Edit the following to change the version of PostgreSQL that is installed
PG_VERSION=9.4
export DEBIAN_FRONTEND=noninteractive
PG_REPO_APT_SOURCE=/etc/apt/sources.list.d/pgdg.list
if [ ! -f "$PG_REPO_APT_SOURCE" ]
then
# Add PG apt repo:
echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main" > "$PG_REPO_APT_SOURCE"
# Add PGDG repo key:
wget --quiet -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | apt-key add -
fi
# Update package list and upgrade all packages
apt-get update
apt-get -y upgrade
# install packages for postgres + python3
apt-get -y install "postgresql-$PG_VERSION" "postgresql-contrib-$PG_VERSION"
apt-get -y install vim git python3-setuptools python3-dev libpq-dev python3-pip
# install virtualenv via pip3, not yet in ubuntu repository (20.11.15)
pip3 install virtualenv
PG_CONF="/etc/postgresql/$PG_VERSION/main/postgresql.conf"
PG_HBA="/etc/postgresql/$PG_VERSION/main/pg_hba.conf"
PG_DIR="/var/lib/postgresql/$PG_VERSION/main"
# Edit postgresql.conf to change listen address to '*':
sed -i "s/#listen_addresses = 'localhost'/listen_addresses = '*'/" "$PG_CONF"
# Append to pg_hba.conf to add password auth:
echo "host all all all md5" >> "$PG_HBA"
# Explicitly set default client_encoding
echo "client_encoding = utf8" >> "$PG_CONF"
# generate locales
locale-gen de_DE.UTF-8
update-locale LANG=de_DE.UTF-8
# Restart so that all new config is loaded:
service postgresql restart
cat << EOF | su - postgres -c psql
-- Create the database user:
CREATE USER $APP_DB_USER WITH PASSWORD '$APP_DB_PASS';
-- Create the database:
CREATE DATABASE $APP_DB_NAME WITH OWNER=$APP_DB_USER
LC_COLLATE='de_DE.UTF-8'
LC_CTYPE='de_DE.UTF-8'
ENCODING='UTF8'
TEMPLATE=template0;
EOF
The script is in the same folder as the Vagrantfile. Relevant output of vagrant shell:
==> default: Running provisioner: shell...
default: Running: C:Users/hema0004/AppData/Local/Temp/vagrant-shell20160201-2732-1v2m7qa.sh
==> default: gpg: no valid OpenPGP data found.
The SSH command responded with a non-zero exit status. Vagrant assumes, that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong.
The given path to the log does not exist on my machine.
Without the shell script, Vagrant works fine. Any ideas?
[Edit]: I can access the machine but I can not execute the script from within the machine:
/bin/sh: 0: Illegal option -
Thx, martin
The problem was that the box had no access to the internet because of missing environment variables. These variables should have been provided via the vagrant-proxyconf plugin. The underlying problem was that the path to $VAGRANT_HOME, where the proxy configuration is provided, was wrong.
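A hedged sketch of providing those variables from the Windows host, assuming the vagrant-proxyconf plugin picks up the VAGRANT_HTTP_PROXY family of environment variables; the proxy URL is a placeholder:
vagrant plugin install vagrant-proxyconf
set VAGRANT_HTTP_PROXY=http://proxy.example.com:3128
set VAGRANT_HTTPS_PROXY=http://proxy.example.com:3128
set VAGRANT_NO_PROXY=localhost,127.0.0.1
vagrant up --provision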
Can you try fetching the key from the following repo instead?
# Add PGDG repo key:
apt-get install -y ca-certificates wget
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -

Installation of camlp4 with opam on OS X 10.9 failed

I do not understand why this installation on a fresh opam installation failed
(fresh means that there was no .opam directory).
I ran opam init
and then opam install ocamlfind, which worked,
followed by opam install camlp4, which failed.
What's wrong?
$ opam install camlp4
The following actions will be performed:
- install camlp4.4.02.1+system
=== + 1 ===
=-=- Synchronizing package archives -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 🐫
=-=- Installing packages =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 🐫
Copying ~/.opam/repo/default/packages/camlp4/camlp4.4.02.1+system/files/install to ~/.opam/system/build/camlp4.4.02.1+system/
Copying ~/.opam/repo/default/packages/camlp4/camlp4.4.02.1+system/files/check-camlp4.sh to ~/.opam/system/build/camlp4.4.02.1+system/
Building camlp4.4.02.1+system:
sh ./check-camlp4.sh
[ERROR] The compilation of camlp4.4.02.1+system failed.
Removing camlp4.4.02.1+system.
Nothing to do.
#=== ERROR while installing camlp4.4.02.1+system ==============================#
# opam-version 1.2.0
# os darwin
# command sh ./check-camlp4.sh
# path $home/.opam/system/build/camlp4.4.02.1+system
# compiler system (4.02.1)
# exit-code 1
# env-file $home/.opam/system/build/camlp4.4.02.1+system/camlp4-94259-58c514.env
# stdout-file $home/.opam/system/build/camlp4.4.02.1+system/camlp4-94259-58c514.out
# stderr-file $home/.opam/system/build/camlp4.4.02.1+system/camlp4-94259-58c514.err
### stdout ###
# ...[truncated]
# 4.02 by switching to a local installation via `opam switch 4.02.1`.
#
# Here are some installation instructions for camlp4 if you obtained OCaml
# via the OPAM binary packages:
#
# http://software.opensuse.org/download.html?project=home%3Aocaml&package=ocaml
#
# * Debian/Ubuntu: sudo apt-get install camlp4-extra
# * RHEL/CentOS/Fedora: sudo yum install ocaml-camlp4
#
### stderr ###
# ./check-camlp4.sh: line 3: camlp4orf: command not found
Actually the answer is already contained in the OPAM output. Just to clarify: you're using the system compiler, i.e., a compiler that is already installed by your operating system (via MacPorts or brew). That means that camlp4, being de facto a part of the compiler, needs to be installed from the system too. So you need to either install it using your package manager, e.g.,
sudo port install ocaml-camlp4
or just switch to a local installation (the recommended way). This will require you to create a new compiler installation,
opam switch 4.02.1
eval `opam config env`
And afterwards everything will work like a charm.

Resources