Install libraries in OpenShift

I've started using OpenShift (free account) and have had success with Python, but I need to install some libraries (requests, among others). How do I do that? I can't find any docs on it...
The forum information is obscure... I've followed this thread (for third-party libs):
setup.py
from setuptools import setup

setup(name='Igor YourAppName',
      version='1.0',
      description='OpenShift App',
      author='Igor Savinkin',
      author_email='igor.savinkin@gmail.com',
      url='http://www.python.org/sigs/distutils-sig/',
      install_requires=['requests>=2.0.0'],
      )
wsgi.py
def application(environ, start_response):
    ctype = 'text/plain'
    if environ['PATH_INFO'] == '/health':
        response_body = "1"
    elif environ['PATH_INFO'] == '/env':
        response_body = ['%s: %s' % (key, value)
                         for key, value in sorted(environ.items())]
        response_body = '\n'.join(response_body)
    else:
        ctype = 'text/html'
        import requests
See the last line, where I try to import requests.
This results in a 500 error:
Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request.
Custom Python package attempt
My second try followed this thread:
I created a libs directory in my root dir, then added the following to wsgi.py:
sys.path.append(os.path.join(os.getenv("OPENSHIFT_REPO_DIR"), "libs"))
and cloned requests into that directory. When I do:
C:\Users\Igor\mypythonapp\libs\requests\requests>git ls-files -c
I get the full list of requests package files... but again, the result is a 500 error.
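For reference, here is a minimal sketch of what the top of wsgi.py could look like with this vendoring approach. It assumes the libs directory sits at the repository root and contains the importable requests package directly (note that cloning the requests repository gives a nested requests/requests layout, so only the inner package folder should end up under libs):

import os
import sys

# Make vendored packages in <repo>/libs importable before anything else
sys.path.insert(0, os.path.join(os.getenv("OPENSHIFT_REPO_DIR"), "libs"))

import requests  # now resolved from libs/requests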

You should try reading through this section (https://developers.openshift.com/en/python-deployment-options.html) of the Developer Portal, which describes how to install dependencies for Python applications on OpenShift Online.

You should use a requirements.txt file. My requirements.txt is below:
admin$ cat requirements.txt
Flask==0.10.1
Requests==2.6.0

Related

Dialogflow CX - Location settings have to be initialized - FAILED_PRECONDITION

I am automating Dialogflow CX using the Python client libraries. That includes agent/intent/entity creation, update, and deletion.
But on the first run I am encountering the error below from Python.
If I log in to the console, set the location from there, and rerun the code, it works fine and I am able to create the agent.
I followed this GCP doc:
https://cloud.google.com/dialogflow/cx/docs/concept/region
I am looking for code to automate the region & location setting before running the Python code.
Below is the code I am using to create the agent.
Error -
google.api_core.exceptions.FailedPrecondition: 400 com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION"
debug_error_string = "{"created":"#1622183899.891000000","description":"Error received from peer ipv4:142.250.195.170:443","file":"src/core/lib/surface/call.cc","file_line":1068,"grpc_message":"com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION","grpc_status":9}"
main.py -
# Import Libraries
import google.auth
import google.auth.transport.requests
from google.cloud import dialogflowcx as df
from google.protobuf.field_mask_pb2 import FieldMask
import os, time
import pandas as pd

# Function - Authentication
def gcp_auth():
    cred, project = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
    auth_req = google.auth.transport.requests.Request()
    cred.refresh(auth_req)

# Function - Create Agent
def create_agent(agent_name, agent_description, language_code, time_zone, location_id, location_path):
    # Regional agents talk to a regional endpoint; the global location uses the default endpoint
    if location_id == "global":
        agentsClient = df.AgentsClient()
    else:
        agentsClient = df.AgentsClient(client_options={"api_endpoint": f"{location_id}-dialogflow.googleapis.com:443"})
    agent = df.Agent(display_name=agent_name, description=agent_description, default_language_code=language_code, time_zone=time_zone, enable_stackdriver_logging=True)
    createAgentRequest = df.CreateAgentRequest(agent=agent, parent=location_path)
    agent = agentsClient.create_agent(request=createAgentRequest)
    return agent
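For reference, a hedged sketch of how create_agent might be invoked; the project ID, names, time zone, and region below are placeholders rather than values from the original post, and the parent follows the documented projects/<project>/locations/<location> format:

# Hypothetical usage - replace the placeholders with real values
gcp_auth()
location_id = "us-east1"
location_path = f"projects/my-gcp-project/locations/{location_id}"
agent = create_agent("MyAgent", "Demo agent", "en", "America/New_York", location_id, location_path)
print(agent.name)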
Currently, Dialogflow does not support configuring the location settings through the API, so you cannot initialise location settings through it. You can only set the location through the Console.
As an alternative, since the location setting has to be initialised only once for each region per project, you could set the location manually and then automate the agent creation process (some useful links: 1 and 2).
On the other hand, if you would find this feature useful, you can file a Feature Request here; it will be evaluated by Google's product team.
Many thanks, Alexandre Moraes. I have raised a feature request for this.

YouTube Data API Build Function doesn't work

Hi friends,
yesterday I used the Python code below to retrieve some comments on YouTube videos successfully:
!pip install --upgrade google-api-python-client
import os
import googleapiclient.discovery
DEVELOPER_KEY = "my_key"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
youtube = googleapiclient.discovery.build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)
youtube
It seems that the build function is suddenly not working. I have even refreshed the API, but in Google Colab I keep receiving the following error message:
UnknownApiNameOrVersion Traceback (most recent call last)
<ipython-input-21-064a9ae417b9> in <module>()
13
14
---> 15 youtube = googleapiclient.discovery.build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)
16 youtube
17
1 frames
/usr/local/lib/python3.6/dist-packages/googleapiclient/discovery.py in build(serviceName, version, http, discoveryServiceUrl, developerKey, model, requestBuilder, credentials, cache_discovery, cache, client_options)
241 raise e
242
--> 243 raise UnknownApiNameOrVersion("name: %s version: %s" % (serviceName, version))
244
245
UnknownApiNameOrVersion: name: youtube version: V3
If anyone could help... I'm using this type of authentication because I don't know how to put the credentials file in Google Drive and open it in Colab, but it worked yesterday:
Results for yesterday's run
Thank you very much in advance, and sorry for anything; I'm new to the community.
Regards
The problem is on the server side as discussed here. Until the server problem is fixed, this solution may help (as suggested by @busunkim96):
First, download this json file: https://www.googleapis.com/discovery/v1/apis/youtube/v3/rest
Then:
import json
from googleapiclient import discovery

# Path to the json file you downloaded:
path_json = '/path/to/file/rest'
with open(path_json) as f:
    service = json.load(f)

# Replace with your actual API key:
api_key = 'your API key'
yt = discovery.build_from_document(service, developerKey=api_key)

# Make a request to see whether this works:
request = yt.search().list(part='snippet',
                           channelId='UCYO_jab_esuFRV4b17AJtAw',
                           publishedAfter='2020-02-01T00:00:00.000Z',
                           publishedBefore='2020-04-23T00:00:00.000Z',
                           order='date',
                           type='video',
                           maxResults=50)
response = request.execute()
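If that call succeeds, response is a plain dict following the usual search.list response shape, so a quick sanity check might be:

# Print video IDs and titles from the search response
for item in response.get('items', []):
    print(item['id']['videoId'], '-', item['snippet']['title'])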
I was able to resolve this issue by passing static_discovery=False to the build call.
Examples:
Previous Code
self.youtube = googleapiclient.discovery.build(API_SERVICE_NAME, API_VERSION, credentials=creds)
New Code
self.youtube = googleapiclient.discovery.build(API_SERVICE_NAME, API_VERSION, credentials=creds, static_discovery=False)
For some reason this issue only arose when I built my program using GitHub Actions.

Weird "is_xhr" error when deploying Flask app to Heroku

I have a Flask app which I've deployed to Heroku; one of the routes is the following:
def get_kws():
    seed_kw = request.json['firstParam']
    audience_max = request.json['secondParam']
    interest_mining_service = InterestMiningService(seed_kw, audience_max)
    query_result = interest_mining_service.query_keyword().tolist()
    if seed_kw in query_result:
        print("yes")
    return jsonify(
        {
            'keyword_data': interest_mining_service.find_kws().to_json(orient='records'),
            'query_results': query_result
        }
    )
When I test this endpoint locally, I have no issues when sending POST and GET requests to that endpoint. However, when I deploy to Heroku, I get the following error:
File "/app/server/controller.py", line 24, in get_kws
2020-02-08T22:31:05.893850+00:00 app[web.1]: 'query_results': query_result
2020-02-08T22:31:05.893850+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flask/json.py", line 298, in jsonify
2020-02-08T22:31:05.893851+00:00 app[web.1]: if current_app.config['JSONIFY_PRETTYPRINT_REGULAR'] and not request.is_xhr:
2020-02-08T22:31:05.893851+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/werkzeug/local.py", line 347, in __getattr__
2020-02-08T22:31:05.893852+00:00 app[web.1]: return getattr(self._get_current_object(), name)
2020-02-08T22:31:05.893858+00:00 app[web.1]: AttributeError: 'Request' object has no attribute 'is_xhr'
I've never seen this error Request object has no attribute 'is_xhr' before and it only seems to be happening when I deploy to Heroku. Any guidance on what I should look into?
There also doesn't seem to be an issue with the json key keyword_data - the issue seems limited to query_results which is a list.
The Werkzeug library (a dependency of Flask) recently received a major update (0.16.1 --> 1.0.0), and it looks like Flask (<=0.12.4) does not restrict the version of Werkzeug that is fetched.
You have two options:
1. Stick with your current version of Flask and restrict the Werkzeug version that is fetched by pinning it explicitly in your application's setup.py or requirements.txt, specifying werkzeug<1.0 or werkzeug==0.16.1 (see the sketch below this list).
2. Upgrade to a recent version of Flask (>=1.0.0), which runs fine with the latest Werkzeug.
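For the first option, a minimal requirements.txt could look like this (the Flask pin is only an illustration; keep whichever Flask version you are already on):

Flask==0.12.4
Werkzeug==0.16.1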
Or you can just force-reinstall the pinned version by calling:
pip install Werkzeug==0.16.1
I have faced this problem too.
I temporarily fixed it by directly checking the request header:
request.headers.get("X-Requested-With") == "XMLHttpRequest"
Not sure if this helps...
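As a sketch of that idea, assuming the is_xhr check lives in your own view code (this does not patch the check inside Flask's jsonify), a hypothetical helper could be:

from flask import request

def request_is_xhr():
    # Same check the removed request.is_xhr property used to perform
    return request.headers.get('X-Requested-With') == 'XMLHttpRequest'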

Define setup.py dependencies from a private PyPI

I'd like to install dependencies from my private PyPI by specifying them within a setup.py.
I've already tried to specify where to find dependencies within the dependency_links this way:
setup(
    ...
    install_requires=["foo==1.0"],
    dependency_links=["https://my.private.pypi/"],
    ...
)
I've also tried to define the entire URL within the dependency_links:
setup(
    ...
    install_requires=[],
    dependency_links=["https://my.private.pypi/foo/foo-1.0.tar.gz"],
    ...
)
but when I try to install with python setup.py install, neither of them works for me.
Can anybody help me?
EDITS:
With the first piece of code I got this error:
...
Installed .../test-1.0.0-py3.7.egg
Processing dependencies for test==1.0.0
Searching for foo==1.0
Reading https://my.private.pypi/
Reading https://pypi.org/simple/foo/
Couldn't find index page for 'foo' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.org/simple/
No local packages or working download links found for foo==1.0
error: Could not find suitable distribution for Requirement.parse('foo==1.0')
while in the second case I didn't get any error, just the following:
...
Installed .../test-1.0.0-py3.7.egg
Processing dependencies for test==1.0.0
Finished processing dependencies for test==1.0.0
UPDATE 1:
I've tried to change the setup.py following sinoroc's instructions. Now my setup.py looks like this:
setup(
    ...
    install_requires=["foo==1.0"],
    dependency_links=["https://username:password@my.private.pypi/folder/foo/foo-1.0.tar.gz"],
    ...
)
I built the library test with python setup.py sdist and tried to install it with pip install /tmp/test/dist/test-1.0.0.tar.gz, but I still get this error:
Processing /tmp/test/dist/test-1.0.0.tar.gz
ERROR: Could not find a version that satisfies the requirement foo==1.0 (from test==1.0.0) (from versions: none)
ERROR: No matching distribution found for foo==1.0 (from test==1.0.0)
Regarding the private PyPi, I don't have any additional information because I'm not the administrator of it. As you can see, I just have the credentials (username and password) for that server.
Additionally, that PyPI is organised in sub-folders, https://my.private.pypi/folder/.., which is where the dependency I want to install lives.
UPDATE 2:
By running pip install --verbose /tmp/test/dist/test-1.0.0.tar.gz, it seems there is only one location searched for versions of the library foo: the public server https://pypi.org/simple/foo/, and not our private server https://my.private.pypi/folder/foo/.
Here the output:
...
1 location(s) to search for versions of foo:
* https://pypi.org/simple/foo/
Getting page https://pypi.org/simple/foo/
Found index url https://pypi.org/simple
Looking up "https://pypi.org/simple/foo/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "GET /simple/foo/ HTTP/1.1" 404 13
Status code 404 not in (200, 203, 300, 301)
Could not fetch URL https://pypi.org/simple/foo/: 404 Client Error: Not Found for url: https://pypi.org/simple/foo/ - skipping
Given no hashes to check 0 links for project 'foo': discarding no candidates
ERROR: Could not find a version that satisfies the requirement foo==1.0 (from test==1.0.0) (from versions: none)
Cleaning up...
Removing source in /private/var/...
Removed build tracker '/private/var/...'
ERROR: No matching distribution found for foo==1.0 (from test==1.0.0)
Exception information:
Traceback (most recent call last):
...
In your second attempt, I believe you should still have foo==1.0 in the install_requires.
Update
Be aware that pip does not support dependency_links (it used to, but does not anymore).
For pip, the alternative is to use command line options such as --index-url, --extra-index-url, or --find-links. These options cannot be enforced on the user of your project (contrary to the dependency links from setuptools), so they have to be properly documented. To facilitate this, a good idea is to provide an example requirements.txt file to the users of your project. This file can contain some of these pip options.
For example:
# requirements.txt
# ...
--find-links 'https://my.private.pypi/'
foo==1.0
# ...
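With such a file in place, users of the project could install everything with something along these lines (the URL is the same placeholder as above):

pip install -r requirements.txt
# or, as a one-off without the requirements file:
pip install --find-links 'https://my.private.pypi/' 'foo==1.0'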

How can I start the eventserver?

I have installed PredictionIO through brew on my OS X (Mavericks) and I can start the admin service (http://0.0.0.0:9000) and the API server (http://0.0.0.0:8000).
But the docs for the Ruby SDK say:
# Create a client object.
client = PredictionIO::EventClient.new(<ACCESS KEY>, <URL OF EVENTSERVER>)
At first I used the API's URL, but other docs (like the Python SDK's) say that the eventserver runs on http://0.0.0.0:7070.
If I try to create an event:
client.create_event('rate', 'user', rate.user_id, { 'targetEntityType'=> 'item', 'targetEntityId' => rate.rateable_id, 'properties'=> {'rating'=> 3 }})
it always returns the same response: 'PredictionIO::EventClient::NotCreatedError: Your request is not supported'
The guide says that the command to run this server is:
pio eventserver
But I don't have this binary. I start everything with the 'predictionio-start-all.sh' script, but that does not start the eventserver.
Thanks in advance !!
The Homebrew formula is maintained by the community and has not been updated to 0.8.4 yet. It is using 0.7.3 (http://braumeister.org/formula/predictionio), which does not work with the current documentation.
Please follow the instructions here to install the latest version: http://docs.prediction.io/install/
