I'm trying to run "zappa init" on my Ubuntu server on AWS; conda is also installed. When I run zappa init, it complains about needing an "active virtual env":

Oh no! An error occurred! :(
==============
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/zappa/cli.py", line 2693, in handle
sys.exit(cli.handle())
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/zappa/cli.py", line 475, in handle
self.init()
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/zappa/cli.py", line 1534, in init
self.check_venv()
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/zappa/cli.py", line 2630, in check_venv
"Learn more about virtual environments here: " + click.style("http://docs.python-guide.org/en/latest/dev/virtualenvs/", bold=False, fg="cyan"))
click.exceptions.ClickException: Zappa requires an active virtual environment!
Learn more about virtual environments here: http://docs.python-guide.org/en/latest/dev/virtualenvs/
==============
Need help? Found a bug? Let us know! :D
File bug reports on GitHub here: https://github.com/Miserlou/Zappa
And join our Slack channel here: https://slack.zappa.io
Love!, ~ Team Zappa!

Judging from the traceback, you are not using a virtual environment in your Zappa deployment. Zappa needs an active virtual environment before it can build and deploy an AWS Lambda package.
Read more about how to create virtual environments here: https://virtualenv.pypa.io/en/stable/
So basically, in your project folder you will have to run:
(your project dir)$ pip install virtualenv
(your project dir)$ virtualenv venv
(your project dir)$ source venv/bin/activate
(your project dir)$ pip install -r requirements.txt
(your project dir)$ pip install zappa
(your project dir)$ zappa init
(your project dir)$ zappa deploy
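If installing virtualenv is awkward alongside conda, Python 3's built-in venv module works just as well. A minimal sketch (the environment name is arbitrary):

```shell
# create and activate a virtual environment with the built-in venv module
python3 -m venv venv
. venv/bin/activate
# activation exports VIRTUAL_ENV, which is how tools such as Zappa
# detect that an environment is active
echo "$VIRTUAL_ENV"
```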

Related

Trying to install a Kafka Connect connector via a jar file using Docker, installation gives errors

I am using docker-compose along with a Dockerfile to install a connector. I have been successful in installing connectors from Confluent Hub, but not my own JAR files.
Here is what I did:
Went to https://search.maven.org/artifact/io.aiven/aiven-kafka-connect-gcs/0.7.0/jar and in the upper right corner, pressed Downloads and clicked on "jar"
Placed this file in the same folder as my Dockerfile
Built my Dockerfile, which contains:
ENV CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components"
USER root
COPY --chown=appuser:appuser aiven-kafka-connect-gcs-0.7.0.jar /usr/share/confluent-hub-components
USER appuser
RUN confluent-hub install --no-prompt aiven/kafka-connect-gcs:0.7.0
I have also tried various confluent-hub install commands, including:
RUN confluent-hub install --no-prompt aiven-kafka-connect-gcs:0.7.0
RUN confluent-hub install --no-prompt confluent-hub-components/aiven-kafka-connect-gcs-0.7.0.jar
RUN confluent-hub install --no-prompt aiven-kafka-connect-gcs-0.7.0.jar
all to no avail. I did try other directories like /etc/kafka-connect/jars and I just keep getting the same issue.
What am I doing wrong? Syntax? Missing additional mounting commands? Something else?
confluent-hub doesn't "install" local JAR files.
By default, it uses its arguments to do an HTTP lookup against the Confluent Hub website and returns the corresponding response. If it's a valid connector, it extracts it to the plugin path; otherwise, you'll get an error.
If you give it a local ZIP, that will work; per its help text, it accepts a "path to a local ZIP file that was downloaded from Confluent Hub".
This is how I did it:
RUN wget -O /set/your/path/here/<connector name here>.tar https://url-of-connector-here/<connector name and version here>.tar
RUN tar -xvf /set/your/path/here/<connector name here>.tar --directory /path/to/connect/plugins/here/
and it worked.
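Whichever way you get the connector onto the image, Kafka Connect expects each plugin to sit in its own subdirectory under one of the plugin.path entries (or to be a single uber-JAR placed directly on the path). A rough sketch of that layout, using a temp directory and an empty file as stand-ins for the real plugin path and JAR:

```shell
# each connector gets its own directory under the plugin path
PLUGIN_PATH=/tmp/plugin-path-demo          # stands in for CONNECT_PLUGIN_PATH
mkdir -p "$PLUGIN_PATH/aiven-kafka-connect-gcs"
touch aiven-kafka-connect-gcs-0.7.0.jar    # placeholder for the downloaded JAR
# copy the connector JAR (and any dependency JARs) into that directory
cp aiven-kafka-connect-gcs-0.7.0.jar "$PLUGIN_PATH/aiven-kafka-connect-gcs/"
ls "$PLUGIN_PATH/aiven-kafka-connect-gcs"
```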

Download chocolatey packages with dependencies but install it later

I've got a question about the package manager Chocolatey. The use case is that I want to download packages from a feed (hosted on Azure DevOps), including dependencies, and save them somewhere on my computer, so I can install these packages later from a local source.
Is it possible to do so? If yes, how can I do it?
Thanks for your effort! For further questions, don't hesitate to comment on my question.
Step 1:
Download the artifact from the Azure DevOps feed and save it on the computer.
1. Open PowerShell and log in to Azure DevOps with az login.
2. Please refer to this doc and run the command to download the artifact from the Azure DevOps feed:
az artifacts universal download --organization https://dev.azure.com/{org} --feed <feed name> --name <package name> --version <version> --path .
Step 2:
In this step, you can run choco install pointed at the local folder (for example choco install <package id> --source "C:\path\to\downloaded\packages") to install the local package. Just run this in a command line task.

pycharm docker-compose debug

I recently started working on a project which uses docker-compose and consists of multiple services, therefore installing and debugging locally has been a problem. I started looking for a way to debug with docker-compose and came across this piece of documentation
While this explains how to configure an interpreter using Django, I use Sanic for the project and therefore can't follow the tutorial to a T. Could you advise on the template for a Run/Debug configuration using docker-compose?
I also read this post but it links to the aforementioned documentation.
I believe that most of the documentation should readily work with Sanic, if you only modify the Dockerfile slightly:
FROM python:3.7
WORKDIR /app
# By copying over requirements first, we make sure that Docker will cache
# our installed requirements rather than reinstall them on every build
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
# Now copy in our code, and run it
COPY . /app
EXPOSE 8000
CMD ["python", "main.py"]
Then in main.py:
from sanic import Sanic
app = Sanic("MyApp")
# ...
if __name__ == "__main__":
app.run(host="0.0.0.0", port=8000)
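For completeness, the setup above assumes a docker-compose.yml next to the Dockerfile; a minimal sketch (the service name and port mapping are placeholders you would adapt to your project):

```yaml
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/app
```

This is the file you select when configuring the Docker Compose interpreter in PyCharm.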
I would suggest adding a Remote Python Environment with Docker Compose. Very useful for debugging. From Preferences -> Project Interpreter -> Add... -> Docker Compose. You should select your Docker Compose file. Afterwards you can simply run/debug your main.py by selecting your Remote Python Interpreter (you can select it from your Run/Debug Configurations).
PS: I build the docker-compose images before running the application from PyCharm. I have experienced some bugs when I build the compose file directly from PyCharm.

Install packages in conda terminal jupyter amazon sagemaker

I have a project in Amazon SageMaker. For this, I have to uninstall specific packages and install others in the terminal. But every time I close or stop the instance, I have to go to the terminal and do all the installations again. Why is this happening?
The package I am experiencing this trouble with is psycopg2:
import psycopg2
gives me a warning suggesting that I should uninstall it and install psycopg2-binary instead.
So I open the terminal and run:
pip uninstall psycopg2
Then in the notebook, I run:
import psycopg2
And have no problem, but if I close and reopen the instance, I get the same warning and have to go through the whole process again.
Thanks for using SageMaker. Installed packages are not persistent when you restart the Notebook Instance. To avoid manually installing them every time, you can create a Lifecycle Configuration which installs your packages and attach it to your Notebook Instance. The script in the Lifecycle Configuration will run every time you restart your Notebook Instance.
For more information on how to use Lifecycle Config you can check out:
https://aws.amazon.com/blogs/machine-learning/customize-your-amazon-sagemaker-notebook-instances-with-lifecycle-configurations-and-the-option-to-disable-internet-access/
@anitasp, you have to create a Docker image, by doing the following:
Be sure to set up your SageMaker Execution Role Policy permissions in AWS IAM (besides S3), and also AmazonEC2ContainerServiceFullAccess, AmazonEC2ContainerRegistryFullAccess and AmazonSageMakerFullAccess.
Create and start instance in SageMaker and Open notebook. Clone the directory structure shown here at your instance: https://github.com/RubensZimbres/Repo-2018/tree/master/AWS%20SageMaker/Jupyter-Folder
Inside Jupyter, run:
! sudo service docker start
! sudo usermod -a -G docker ec2-user
! docker info
! chmod +x decision_trees/train
! chmod +x decision_trees/serve
! aws ecr create-repository --repository-name decision-trees
! aws ecr get-login --no-include-email
Copy and paste the login command from that output into the line below:
! docker login -u abc -p abc12345 http://abc123
Run
! docker build -t decision-trees .
! docker tag decision-trees your_aws_account_id.dkr.ecr.us-east-1.amazonaws.com/decision-trees:latest
! docker push your_aws_account_id.dkr.ecr.us-east-1.amazonaws.com/decision-trees:latest
! aws ecs register-task-definition --cli-input-json file://decision-trees-task-def.json
And adapt to your needs, according to the algorithm of your choice. You will need the Dockerfile, hyperparameters.json, etc.
The documented project is here: https://github.com/RubensZimbres/Repo-2018/tree/master/AWS%20SageMaker
By default, Python packages installed from a Notebook Instance will not be persisted to the next Notebook Instance session. One solution to this problem is to:
1) Create (or clone from an existing conda env) a new conda environment in /home/ec2-user/SageMaker, which is persisted between sessions. For example:
conda create --prefix /home/ec2-user/SageMaker/envs/custom-environment --clone tensorflow_p36
2) Next, create a new Lifecycle Configuration for “start notebook” with the following contents:
#!/bin/bash
sudo -u ec2-user -i <<'EOF'
ln -s /home/ec2-user/SageMaker/envs/custom-environment /home/ec2-user/anaconda3/envs/custom-environment
EOF
3) Finally, attach the Lifecycle Configuration to your Notebook Instance
Now, when you restart your Notebook Instance, your custom environment will be detected by conda and Jupyter. Any new packages you install to this environment will be persisted between sessions and then soft-linked back to conda at startup.
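To see why this works, the trick can be simulated with plain directories; the paths below are stand-ins for the real SageMaker layout, not the actual instance paths. Anything written under the persisted directory survives a restart, and the lifecycle script only has to recreate the symlink:

```shell
# stand-in paths: PERSIST survives restarts, EPHEMERAL does not
PERSIST=/tmp/sagemaker-demo/SageMaker/envs
EPHEMERAL=/tmp/sagemaker-demo/anaconda3/envs
mkdir -p "$PERSIST/custom-environment" "$EPHEMERAL"
echo "some-installed-package" > "$PERSIST/custom-environment/marker"
# this is the only line the start-notebook lifecycle script needs to run
ln -sfn "$PERSIST/custom-environment" "$EPHEMERAL/custom-environment"
cat "$EPHEMERAL/custom-environment/marker"   # the environment is visible again
```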

Installing MULE ESB mule-standalone-3.3.1

Can someone guide me through installing Mule ESB (mule-standalone-3.3.1) on Ubuntu? I am unable to find any installation documentation. I want to automate it through Chef.
It can be as simple as downloading and unpacking the archive file from: http://dist.codehaus.org/mule/distributions/mule-standalone-3.3.1.zip
Note: you need JDK 6/7 installed first.
Here's a chef cookbook that does this: https://github.com/ryandcarter/mule-cookbook
And here's a Vagrant script for running the Mule cookbook on Ubuntu, etc.: https://github.com/ryandcarter/vagrant-mule
It is very simple:
Download and unpack the archive file from http://dist.codehaus.org/mule/distributions/mule-standalone-3.3.1.zip, or whatever version you want to install.
Put the unpacked directory anywhere you like, such as /opt/ or /usr/local/.
Put your Mule application inside the apps folder.
Then go to the bin directory and run ./mule start. Now the Mule server is running. You can also check the Mule log in the logs folder, in the mule.log file.
This is an old question, but in case there are others who are looking:
You want to install Mule as an Ubuntu service, so that it restarts when the server restarts. There are a couple of basic steps to this.
I have detailed instructions and installation files at my GitHub repository:
https://github.com/jamesontriplett/mule_linux_service_install
Steps in general:
Install a startup script in /etc/init.d
Install a startup parameter file in /etc/mule
Customize parameters in the wrapper.conf file in /conf/wrapper.conf
Install the license file onto the server if using enterprise
Add the startup script to the run levels.
To test, you want to reboot the Linux server to make sure that Mule will come back after a reboot. If it doesn't, you have a reliability issue.
