Friends,
Yesterday I used the Python code below to successfully retrieve some comments on YouTube videos:
!pip install --upgrade google-api-python-client
import os
import googleapiclient.discovery
DEVELOPER_KEY = "my_key"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
youtube = googleapiclient.discovery.build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)
youtube
It seems that the build function is suddenly not working. I have even refreshed the API, but in Google Colab I keep receiving the following error message:
UnknownApiNameOrVersion Traceback (most recent call last)
<ipython-input-21-064a9ae417b9> in <module>()
13
14
---> 15 youtube = googleapiclient.discovery.build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)
16 youtube
17
1 frames
/usr/local/lib/python3.6/dist-packages/googleapiclient/discovery.py in build(serviceName, version, http, discoveryServiceUrl, developerKey, model, requestBuilder, credentials, cache_discovery, cache, client_options)
241 raise e
242
--> 243 raise UnknownApiNameOrVersion("name: %s version: %s" % (serviceName, version))
244
245
UnknownApiNameOrVersion: name: youtube version: V3
I would appreciate any help. I'm using this type of authentication because I don't know how to put the credentials file in Google Drive and open it in Colab. But it worked yesterday:
Results for yesterday's run
Thank you very much in advance. And sorry for anything, I'm new to the community.
Regards
The problem is on the server side, as discussed here. Until the server problem is fixed, this workaround may help (as suggested by @busunkim96):
First, download this json file: https://www.googleapis.com/discovery/v1/apis/youtube/v3/rest
Then:
import json
from googleapiclient import discovery

# Path to the json file you downloaded:
path_json = '/path/to/file/rest'
with open(path_json) as f:
    service = json.load(f)

# Replace with your actual API key:
api_key = 'your API key'

yt = discovery.build_from_document(service,
                                   developerKey=api_key)

# Make a request to see whether this works:
request = yt.search().list(part='snippet',
                           channelId='UCYO_jab_esuFRV4b17AJtAw',
                           publishedAfter='2020-02-01T00:00:00.000Z',
                           publishedBefore='2020-04-23T00:00:00.000Z',
                           order='date',
                           type='video',
                           maxResults=50)
response = request.execute()
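If you prefer not to download the file manually, you could also fetch the discovery document at runtime and feed it to build_from_document, along these lines (a sketch, assuming the requests package is installed):
import requests
from googleapiclient import discovery

# Fetch the YouTube v3 discovery document directly from the URL above
discovery_url = 'https://www.googleapis.com/discovery/v1/apis/youtube/v3/rest'
service = requests.get(discovery_url).json()

api_key = 'your API key'  # replace with your actual API key
yt = discovery.build_from_document(service, developerKey=api_key)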
I was able to resolve this issue by passing static_discovery=False to the build command.
Examples:
Previous Code
self.youtube = googleapiclient.discovery.build(API_SERVICE_NAME, API_VERSION, credentials=creds)
New Code
self.youtube = googleapiclient.discovery.build(API_SERVICE_NAME, API_VERSION, credentials=creds, static_discovery=False)
For some reason this issue only arose when I built my program using GitHub Actions.
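Applied to the API-key-based YouTube snippet from the question above, that would look roughly like this (a sketch; static_discovery is only accepted by newer google-api-python-client releases):
import googleapiclient.discovery

DEVELOPER_KEY = "my_key"  # your API key

# static_discovery=False forces the client to fetch the discovery document
# over the network instead of using the bundled static documents.
youtube = googleapiclient.discovery.build(
    "youtube", "v3",
    developerKey=DEVELOPER_KEY,
    static_discovery=False)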
Related
I am automating Dialogflow CX using the Python client libraries. That includes creation, update, and deletion of agents, intents, entities, etc.
But on the first run, I am encountering the error below from Python.
If I log in to the console, set the location from there, and rerun the code, it works fine and I am able to create the agent.
Followed this URL of GCP -
https://cloud.google.com/dialogflow/cx/docs/concept/region
I am looking for code to automate the region and location setting before running the Python code. Kindly provide me with the code.
Below is the code I am using to create the agent.
Error -
google.api_core.exceptions.FailedPrecondition: 400 com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION"
debug_error_string = "{"created":"#1622183899.891000000","description":"Error received from peer ipv4:142.250.195.170:443","file":"src/core/lib/surface/call.cc","file_line":1068,"grpc_message":"com.google.apps.framework.request.FailedPreconditionException: Location settings have to be initialized before creating the agent in location: us-east1. Code: FAILED_PRECONDITION","grpc_status":9}"
main.py -
# Import Libraries
import google.auth
import google.auth.transport.requests
from google.cloud import dialogflowcx as df
from google.protobuf.field_mask_pb2 import FieldMask
import os, time
import pandas as pd

# Function - Authentication
def gcp_auth():
    cred, project = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
    auth_req = google.auth.transport.requests.Request()
    cred.refresh(auth_req)

# Function - Create Agent
def create_agent(agent_name, agent_description, language_code, location_id, location_path):
    if location_id == "global":
        agentsClient = df.AgentsClient()
    else:
        agentsClient = df.AgentsClient(client_options={"api_endpoint": f"{location_id}-dialogflow.googleapis.com:443"})
    # time_zone is assumed to be defined elsewhere (e.g. a module-level constant)
    agent = df.Agent(display_name=agent_name, description=agent_description, default_language_code=language_code, time_zone=time_zone, enable_stackdriver_logging=True)
    createAgentRequest = df.CreateAgentRequest(agent=agent, parent=location_path)
    agent = agentsClient.create_agent(request=createAgentRequest)
    return agent
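For context, calling these helpers looks roughly like this (hypothetical project and values; the parent path format follows the Dialogflow CX docs):
time_zone = "America/New_York"  # module-level constant used by create_agent

gcp_auth()

# Parent path format: projects/<project-id>/locations/<location-id>
agent = create_agent(
    agent_name="test-agent",
    agent_description="Created via the CX API",
    language_code="en",
    location_id="us-east1",
    location_path="projects/my-project/locations/us-east1")
print(agent.name)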
Currently, Dialogflow does not support configuring the location settings through the API, thus you cannot initialise the location settings through it. You can only set the location through the Console.
As an alternative, since the location setting has to be initialised only once for each region per project, you could set the location manually and then automate the agent creation process; some useful links: 1 and 2.
On the other hand, if you would find this feature useful, you can file a Feature Request here. It will be evaluated by Google's product team.
Many thanks Alexandre Moraes. I have raised a feature request for the same.
I have a Lambda that polls the status of a DataBrew job using a boto3 client. The code - as written here - works fine in my local environment. When I put it into a Lambda function, I get the error:
[ERROR] AttributeError: 'GlueDataBrew' object has no attribute 'describe_job_run'
This is the syntax found in the Boto3 documentation:
client.describe_job_run(
    Name='string',
    RunId='string')
This is my code:
import boto3

def get_brewjob_status(jobName, jobRunId):
    brew = boto3.client('databrew')
    try:
        jobResponse = brew.describe_job_run(Name=jobName, RunId=jobRunId)
        status = jobResponse['State']
    except Exception as e:
        status = 'FAILED'
        print('Unable to get job status')
        raise(e)
    return {
        'jobStatus': status
    }

def lambda_handler(event, context):
    jobName = event['jobName']
    jobRunId = event['jobRunId']
    response = get_brewjob_status(jobName, jobRunId)
    return response
I am using the Lambda runtime version of boto3. The jobName and jobRunId variables are strings passed from a Step Function, but I've also tried hard-coding them into the Lambda to check the error, and I get the same result. I have tried running it on both the Python 3.7 and Python 3.8 runtimes. I'm also confident (and have double-checked) that the IAM permissions allow the Lambda access to DataBrew. Thanks for any ideas!
Fixed my own problem. There must be some kind of conflict between the boto3 runtime and DataBrew - maybe it has not been updated to include DataBrew yet? I created a .zip deployment package and it worked fine. Should have done that two days ago...
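One way to confirm this kind of mismatch is to log the boto3 version from inside the Lambda - DataBrew only appears in relatively recent boto3 releases, so an older bundled version won't expose its operations. A quick diagnostic sketch:
import boto3
import botocore

# Log the versions the Lambda runtime actually bundles
print('boto3:', boto3.__version__, 'botocore:', botocore.__version__)

# If the bundled version predates DataBrew, the client will be missing its
# operations, which is consistent with the AttributeError above.
brew = boto3.client('databrew')
print('describe_job_run available:', hasattr(brew, 'describe_job_run'))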
I have been using the nipyapi client to manage new Apache NiFi deployments and it is working great, but I am getting an issue when trying to ENABLE a Controller Service.
My Setup:
I run NiFi in Docker, and every time a container starts there is a series of steps such as:
Build NiFi server - OK
Download the templates.xml - OK
Upload templates to NiFi - OK
Deploy templates to NiFi Canvas - OK
ENABLE Controller Service - ERROR
import json

import requests
import nipyapi

nipyapi.config.nifi_config.host = 'http://localhost:9999/nifi-api'
nipyapi.canvas.get_controller('MariaDB', identifier_type='name', bool_response=False)

# Enable Controller
headers = {'Content-Type': 'application/json'}
url = 'http://localhost:9999/nifi-api/flow/process-groups/' + nipyapi.canvas.get_root_pg_id() + '/controller-services'
r = requests.get(url)
response = json.loads(r.text)
controllerId = response['controllerServices'][0]['id']
nipyapi.canvas.schedule_controller(controllerId, 'True', refresh=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/nipyapi/canvas.py", line 1222, in schedule_controller
assert isinstance(controller, nipyapi.nifi.ControllerServiceEntity)
AssertionError
Not sure what I am missing!
PS - I have also been trying the nifi-toolkit, but it is not working reliably either:
./cli.sh nifi pg-enable-services --processGroupId 2b8b54ca-016b-1000-0655-c3ec484fd81d -u http://localhost:9999 --verbose
Sometimes it works, sometimes it does not!
I would like to stick with one tool, e.g. the toolkit or nipyapi (which is faster).
Any help would be great! Thanks.
Per the error, NiPyAPI expects to be passed the Controller object, not just the ID.
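In other words, something along these lines should work (a sketch reusing the names from the question; get_controller already returns the ControllerServiceEntity that schedule_controller expects, and the scheduled flag is typically a real boolean rather than the string 'True'):
import nipyapi

nipyapi.config.nifi_config.host = 'http://localhost:9999/nifi-api'

# Look the service up by name; this returns the ControllerServiceEntity itself
controller = nipyapi.canvas.get_controller('MariaDB', identifier_type='name')

# Pass the entity (not its id string) and a boolean to enable it
nipyapi.canvas.schedule_controller(controller, True, refresh=True)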
I am using Azure databricks and I ran the following Python code:
sas_token = "<my sas key>"

dbutils.fs.mount(
    source = "wasbs://<container>@<storageaccount>.blob.core.windows.net",
    mount_point = "/mnt/gl",
    extra_configs = {"fs.azure.sas.<container>.<storageaccount>.blob.core.windows.net": sas_token})
This seemed to run fine. So I then ran:
df = spark.read.text("/mnt/gl/glAgg_LE.csv")
Which gave me the error:
shaded.databricks.org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Not sure what I'm doing wrong though. I'm pretty sure my sas key is correct.
OK, if you are getting this error, double-check both the SAS key and the container name.
Turned out I had pointed it to the wrong container!
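If the mount was already created against the wrong container, it may help to drop it and remount with the corrected values, roughly like this (a sketch reusing the placeholder names from above):
# Remove the bad mount before retrying
dbutils.fs.unmount("/mnt/gl")

# Remount with the container name that the SAS token actually grants access to
dbutils.fs.mount(
    source = "wasbs://<correct-container>@<storageaccount>.blob.core.windows.net",
    mount_point = "/mnt/gl",
    extra_configs = {"fs.azure.sas.<correct-container>.<storageaccount>.blob.core.windows.net": sas_token})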
I've started to use OpenShift (free account) and have had success with Python. But I need to install some libraries (requests and others). How do I do it? I can't find any docs on it...
The forum's info is obscure... I've followed this thread (for third-party libs):
Setup.py
from setuptools import setup

setup(name='Igor YourAppName',
      version='1.0',
      description='OpenShift App',
      author='Igor Savinkin',
      author_email='igor.savinkin@gmail.com',
      url='http://www.python.org/sigs/distutils-sig/',
      install_requires=['requests>=2.0.0'],
      )
WSGI.py
def application(environ, start_response):
    ctype = 'text/plain'
    if environ['PATH_INFO'] == '/health':
        response_body = "1"
    elif environ['PATH_INFO'] == '/env':
        response_body = ['%s: %s' % (key, value)
                         for key, value in sorted(environ.items())]
        response_body = '\n'.join(response_body)
    else:
        ctype = 'text/html'
        import requests
See the last line, where I try to import requests.
This results in a 500 error:
Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request.
Custom Python package attempt
My second try followed this thread:
I've created a libs directory in my root dir, then added this to wsgi.py:
sys.path.append(os.path.join(os.getenv("OPENSHIFT_REPO_DIR"), "libs"))
and cloned requests into that directory. When I do:
C:\Users\Igor\mypythonapp\libs\requests\requests>git ls-files -c
I get the full list of requests package files... but again, the result is a 500 error.
You should try reading through this section (https://developers.openshift.com/en/python-deployment-options.html) of the Developer Portal, which describes how to install dependencies for Python applications on OpenShift Online.
You should use requirements.txt. My requirements.txt is below:
admin$ cat requirements.txt
Flask==0.10.1
Requests==2.6.0