Ajax data element is an array, causing Python "Unexpected query string parameters" error

I have an Ajax call to a backend Python program with this calling sequence:
var dataArray = {
    "company_id": company_id,
    "plan_id": plan_id,
    "demographics_to_save": demographics_to_save,
    "takeup": takeup,
    "number_of_employees_with_dependents": total_deps,
    "percentMale": male_pct / 100,
    "percentEmployeesWithDependents": deps_pct / 100
};
$.ajax({
    url: "http://ec2.xx.xx.xx.compute-1.amazonaws.com:8080/save_census_manual",
    dataType: 'json',
    async: false,
    data: dataArray,
    success: function(data) {
        company_id = data.company_id;
        plan_id = data.plan_id;
    } // success
});
The variable demographics_to_save is a nested array, structured as seen in the Chrome DevTools console:
dataArray
{company_id: 84, plan_id: 61, demographics_to_save: Array(10), takeup: "0.68", number_of_employees_with_dependents: 2, …}
  company_id: 84
  demographics_to_save: Array(10)
    0: (4) [2, 1, 1, 1]
    1: (4) [0, 0, 0, 0]
    2: (4) [0, 0, 0, 0]
    3: (4) [0, 0, 0, 0]
    4: (4) [0, 0, 0, 0]
    5: (4) [0, 0, 0, 0]
    6: (4) [0, 0, 0, 0]
    7: (4) [0, 0, 0, 0]
    8: (4) [0, 0, 0, 0]
    9: (4) [0, 0, 0, 0]
  number_of_employees_with_dependents: 2
  percentEmployeesWithDependents: 0.67
  percentMale: 0.67
  plan_id: 61
  takeup: "0.68"
When the Ajax call is invoked, the Python backend responds with:
404 Not Found
Unexpected query string parameters: demographics_to_save[3][], demographics_to_save[1][], demographics_to_save[2][], demographics_to_save[7][], demographics_to_save[8][], demographics_to_save[6][], demographics_to_save[0][], demographics_to_save[9][], demographics_to_save[5][], demographics_to_save[4][]
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/cherrypy/_cpdispatch.py", line 60, in call
return self.callable(*self.args, **self.kwargs)
TypeError: save_census_manual() got an unexpected keyword argument 'demographics_to_save[2][]'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/cherrypy/_cprequest.py", line 631, in respond
self._do_respond(path_info)
File "/usr/local/lib/python3.5/dist-packages/cherrypy/_cprequest.py", line 690, in _do_respond
response.body = self.handler()
File "/usr/local/lib/python3.5/dist-packages/cherrypy/lib/encoding.py", line 221, in call
self.body = self.oldhandler(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/cherrypy/_cpdispatch.py", line 66, in call
raise sys.exc_info()[1]
File "/usr/local/lib/python3.5/dist-packages/cherrypy/_cpdispatch.py", line 64, in call
test_callable_spec(self.callable, self.args, self.kwargs)
File "/usr/local/lib/python3.5/dist-packages/cherrypy/_cpdispatch.py", line 197, in test_callable_spec
raise cherrypy.HTTPError(404, message=message)
cherrypy._cperror.HTTPError: (404, 'Unexpected query string parameters: demographics_to_save[3][], demographics_to_save[1][], demographics_to_save[2][], demographics_to_save[7][], demographics_to_save[8][], demographics_to_save[6][], demographics_to_save[0][], demographics_to_save[9][], demographics_to_save[5][], demographics_to_save[4][]')
The Python code is a simple function:
def save_census_manual(company_id, plan_id, demographics_to_save, takeup,
                       number_of_employees_with_dependents, percentMale,
                       percentEmployeesWithDependents):
    # some code
    return
I'd like to know why all the unexpected query string parameter errors are occurring (see above).
I am using the CherryPy Python web framework.
The request sent to the "offending" function looks like this:
http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:8080/save_census_manual?company_id=84&plan_id=61&demographics_to_save=%5B0%5D%5B%5D=2&demographics_to_save%5B0%5D%5B%5D=1&demographics_to_save%5B0%5D%5B%5D=1&demographics_to_save%5B0%5D%5B%5D=1&demographics_to_save%5B1%5D%5B%5D=0&demographics_to_save%5B1%5D%5B%5D=0&demographics_to_save%5B1%5D%5B%5D=0&demographics_to_save%5B1%5D%5B%5D=0&demographics_to_save%5B2%5D%5B%5D=0&demographics_to_save%5B2%5D%5B%5D=0&demographics_to_save%5B2%5D%5B%5D=0&demographics_to_save%5B2%5D%5B%5D=0&demographics_to_save%5B3%5D%5B%5D=0&demographics_to_save%5B3%5D%5B%5D=0&demographics_to_save%5B3%5D%5B%5D=0&demographics_to_save%5B3%5D%5B%5D=0&demographics_to_save%5B4%5D%5B%5D=0&demographics_to_save%5B4%5D%5B%5D=0&demographics_to_save%5B4%5D%5B%5D=0&demographics_to_save%5B4%5D%5B%5D=0&demographics_to_save%5B5%5D%5B%5D=0&demographics_to_save%5B5%5D%5B%5D=0&demographics_to_save%5B5%5D%5B%5D=0&demographics_to_save%5B5%5D%5B%5D=0&demographics_to_save%5B6%5D%5B%5D=0&demographics_to_save%5B6%5D%5B%5D=0&demographics_to_save%5B6%5D%5B%5D=0&demographics_to_save%5B6%5D%5B%5D=0&demographics_to_save%5B7%5D%5B%5D=0&demographics_to_save%5B7%5D%5B%5D=0&demographics_to_save%5B7%5D%5B%5D=0&demographics_to_save%5B7%5D%5B%5D=0&demographics_to_save%5B8%5D%5B%5D=0&demographics_to_save%5B8%5D%5B%5D=0&demographics_to_save%5B8%5D%5B%5D=0&demographics_to_save%5B8%5D%5B%5D=0&demographics_to_save%5B9%5D%5B%5D=0&demographics_to_save%5B9%5D%5B%5D=0&demographics_to_save%5B9%5D%5B%5D=0&demographics_to_save%5B9%5D%5B%5D=0&takeup=0.68&number_of_employees_with_dependents=2&percentMale=67&percentEmployeesWithDependents=67

I got around the issue by changing one line of the Ajax call:
"demographics_to_save":JSON.stringify(demographics_to_save),
...then, in the backend Python code:
demographics_to_save = json.loads(demographics_to_save)
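For what it's worth, the reason this workaround succeeds: JSON.stringify collapses the nested array into a single string parameter, which one json.loads call restores on the Python side, so the dispatcher only ever sees one ordinary parameter. A minimal sketch of that round trip (the sample payload below is illustrative, not the real census data):

```python
import json

# What the browser sends after JSON.stringify(demographics_to_save):
demographics_param = "[[2, 1, 1, 1], [0, 0, 0, 0]]"

# Inside the handler, a single json.loads call restores the nesting:
demographics_to_save = json.loads(demographics_param)
print(demographics_to_save[0])  # → [2, 1, 1, 1]
```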

DataprocCreateClusterOperator fails due to TypeError

EDIT 1: The problem is related to the field "initialization_actions". Originally I'd put a string there; now I give it the object it's asking for:
from google.cloud.dataproc_v1beta2 import NodeInitializationAction

CLUSTER_CONFIG = {
    ...
    "initialization_actions": [NodeInitializationAction({
        "executable_file": <string>})]
}
Unfortunately it's still complaining:
ERROR - Parameter to MergeFrom() must be instance of same class: expected google.cloud.dataproc.v1beta2.NodeInitializationAction got NodeInitializationAction.
I am trying to deploy a Dataproc cluster with airflow.providers.google.cloud.operators.dataproc.DataprocCreateClusterOperator, but I get a cryptic TypeError.
Here is the task definition:
CLUSTER_CONFIG = {
    "config_bucket": <my_bucket>,
    "temp_bucket": <my_bucket>,
    "master_config": {
        "num_instances": 1,
        "machine_type_uri": "c2-standard-8",
        "disk_config": {"boot_disk_type": "pd-standard", "boot_disk_size_gb": 1024},
    },
    "initialization_actions": [<string>],
}
create_cluster = DataprocCreateClusterOperator(
    task_id="create_cluster",
    project_id=PROJECT_ID,
    cluster_config=CLUSTER_CONFIG,
    region=REGION,
    cluster_name=CLUSTER_NAME,
    metadata=[("ENV", ENV)],
    dag=dag)
Traceback:
Traceback (most recent call last):
File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 985, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/airflow/airflow/providers/google/cloud/operators/dataproc.py", line 603, in execute
cluster = self._create_cluster(hook)
File "/usr/local/lib/airflow/airflow/providers/google/cloud/operators/dataproc.py", line 540, in _create_cluster
metadata=self.metadata)
File "/usr/local/lib/airflow/airflow/providers/google/common/hooks/base_google.py", line 425, in inner_wrapper
return func(self, *args, **kwargs)
File "/usr/local/lib/airflow/airflow/providers/google/cloud/hooks/dataproc.py", line 304, in create_cluster
metadata=metadata)
File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/dataproc_v1beta2/services/cluster_controller/client.py", line 412, in create_cluster
request = clusters.CreateClusterRequest(request)
File "/opt/python3.6/lib/python3.6/site-packages/proto/message.py", line 506, in __init__
pb_value = marshal.to_proto(pb_type, value)
File "/opt/python3.6/lib/python3.6/site-packages/proto/marshal/marshal.py", line 208, in to_proto
pb_value = rule.to_proto(value)
File "/opt/python3.6/lib/python3.6/site-packages/proto/marshal/rules/message.py", line 32, in to_proto
return self._descriptor(**value)
TypeError: Parameter to MergeFrom() must be instance of same class: expected google.cloud.dataproc.v1beta2.NodeInitializationAction got str
The field "initialization_actions" is not a list of strings, but a list of dicts:
"initialization_actions": [{"executable_file": <string>}]

PRAW throwing Max retries exceeded with URL

I'm not very experienced with PRAW, so as a test I wrote a simple program that finds the top posts in a subreddit:
import praw

reddit = praw.Reddit(client_id='id',
                     client_secret='secret',
                     user_agent='agent')

top = reddit.subreddit('memes').top(limit=5)
for post in top:
    print(post.title)
However, it always raises this exception:
Traceback (most recent call last):
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 696, in urlopen
self._prepare_proxy(conn)
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 964, in _prepare_proxy
conn.connect()
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connection.py", line 359, in connect
conn = self._connect_tls_proxy(hostname, conn)
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connection.py", line 496, in _connect_tls_proxy
return ssl_wrap_socket(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\util\ssl_.py", line 432, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\util\ssl_.py", line 474, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock)
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 1040, in _create
self.do_handshake()
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1123)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\adapters.py", line 439, in send
resp = conn.urlopen(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\util\retry.py", line 573, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.reddit.com', port=443): Max retries exceeded with url: /api/v1/access_token (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1123)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\requestor.py", line 53, in request
return self._http.request(*args, timeout=timeout, **kwargs)
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='www.reddit.com', port=443): Max retries exceeded with url: /api/v1/access_token (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1123)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\jlche\Desktop\Wörk\Discord bots\Reddit Entertainment Bot\main.py", line 9, in <module>
for post in top:
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\praw\models\listing\generator.py", line 63, in __next__
self._next_batch()
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\praw\models\listing\generator.py", line 73, in _next_batch
self._listing = self._reddit.get(self.url, params=self.params)
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\praw\reddit.py", line 530, in get
return self._objectify_request(method="GET", params=params, path=path)
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\praw\reddit.py", line 626, in _objectify_request
self.request(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\praw\reddit.py", line 808, in request
return self._core.request(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\sessions.py", line 332, in request
return self._request_with_retries(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\sessions.py", line 252, in _request_with_retries
return self._do_retry(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\sessions.py", line 162, in _do_retry
return self._request_with_retries(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\sessions.py", line 252, in _request_with_retries
return self._do_retry(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\sessions.py", line 162, in _do_retry
return self._request_with_retries(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\sessions.py", line 227, in _request_with_retries
response, saved_exception = self._make_request(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\sessions.py", line 185, in _make_request
response = self._rate_limiter.call(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\rate_limit.py", line 35, in call
kwargs["headers"] = set_header_callback()
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\sessions.py", line 282, in _set_header_callback
self._authorizer.refresh()
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\auth.py", line 325, in refresh
self._request_token(grant_type="client_credentials")
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\auth.py", line 153, in _request_token
response = self._authenticator._post(url, **data)
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\auth.py", line 28, in _post
response = self._requestor.request(
File "C:\Users\jlche\AppData\Local\Programs\Python\Python39\lib\site-packages\prawcore\requestor.py", line 55, in request
raise RequestException(exc, args, kwargs)
prawcore.exceptions.RequestException: error with request HTTPSConnectionPool(host='www.reddit.com', port=443): Max retries exceeded with url: /api/v1/access_token (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1123)')))
I have looked at other StackOverflow questions, but none have resolved my problem. I have ndg-httpsclient, pyopenssl, and pyasn1 installed. I'm running Python 3.9 and my computer is not connected to any proxies.
How do I resolve this problem?
pip install requests==2.24.0
There may be some problems with the latest version, especially when you use a VPN.
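If downgrading requests is undesirable, another option sometimes suggested for proxy/VPN-related SSL failures is to give PRAW a requests session that ignores the system proxy settings. This is a sketch under the assumption that a proxy is the culprit; PRAW's documented requestor_kwargs option accepts such a session:

```python
import requests

# trust_env=False makes the session ignore HTTP(S)_PROXY environment
# variables and the OS proxy configuration, which are a common trigger
# for "EOF occurred in violation of protocol" during the TLS handshake.
session = requests.Session()
session.trust_env = False

# Hand the session to PRAW (credentials are placeholders):
#   reddit = praw.Reddit(client_id='id', client_secret='secret',
#                        user_agent='agent',
#                        requestor_kwargs={'session': session})
```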

How to call an async function inside a Celery task

I have a web chat using websockets (AsyncWebsocketConsumer, django-channels). I'm using Celery to parse a request, but it halts with errors I can't debug every time I try to send the response back to the consumer.
This attempt gives me the error below:
import asyncio
from celery import shared_task
from channels.layers import get_channel_layer

@shared_task
def execute(command, parameter, room_group_name):
    if command == '/stock':
        loop = asyncio.get_event_loop()
        loop.run_until_complete(sendData(stock(parameter), "BOT", room_group_name))
        return True
    loop = asyncio.get_event_loop()
    loop.run_until_complete(sendData("I do not understand that parameter", "BOT", room_group_name))
    return True

async def sendData(message, from_, room_group_name):
    channel_layer = get_channel_layer()
    import datetime
    currentDT = datetime.datetime.now()
    datetime = currentDT.strftime("%Y-%m-%d %H:%M:%S")  # note: shadows the datetime module
    await channel_layer.group_send(
        room_group_name,
        {
            'type': 'chat_message',
            'username': from_,
            'datetime': datetime,
            'message': message
        }
    )
    await asyncio.sleep(5)
Error:
[2019-05-12 18:01:15,491: ERROR/ForkPoolWorker-1] Task chat.tasks.execute[8a69afca-8173-46d0-84bc-4ee5ce7782ca] raised unexpected: OSError(9, 'Bad file descriptor')
Traceback (most recent call last):
File "/Users/juan/Documents/manu/dev/python_challenge/venv/lib/python3.6/site-packages/celery/app/trace.py", line 385, in trace_task
R = retval = fun(*args, **kwargs)
File "/Users/juan/Documents/manu/dev/python_challenge/venv/lib/python3.6/site-packages/celery/app/trace.py", line 648, in __protected_call__
return self.run(*args, **kwargs)
File "/Users/juan/Documents/manu/dev/python_challenge/chat/tasks.py", line 14, in execute
loop.run_until_complete(sendData(stock(parameter), "BOT", room_group_name))
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 455, in run_until_complete
self.run_forever()
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 422, in run_forever
self._run_once()
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 1396, in _run_once
event_list = self._selector.select(timeout)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/selectors.py", line 577, in select
kev_list = self._kqueue.control(None, max_ev, timeout)
OSError: [Errno 9] Bad file descriptor
The task dies with OSError: [Errno 9] Bad file descriptor, but I cannot find where it is coming from.
celery==4.3.0

asyncio_redis and aioredis error when getting a list of keys

When I try to get values for a list of keys using asyncio_redis or aioredis, I get the following error. I know it is related to Python sockets somehow, but I am unable to resolve it. I have attached both the code and the error log. Here keys is a list of large byte arrays, and get_params_redis is called by multiple processes. Any help would be appreciated, thanks!
import asyncio
import aioredis
import torch

async def multi_get_key_redis(keys):
    redis = await aioredis.create_redis_pool('redis://localhost')
    result = []
    for key in keys:
        result.append(await redis.get(key))
    # assert result == await asyncio.gather(*keys)
    # return result
    redis.close()
    await redis.wait_closed()
    print(result)
    return result

def get_params_redis(shapes):
    i = -1
    params = []
    keys = []
    for s in range(len(shapes)):
        keys.append(s)
    values = asyncio.get_event_loop().run_until_complete(multi_get_key_redis(keys))
    for shape in shapes:
        i = i + 1
        param_np = pc._loads(values[i]).reshape(shape)
        param_tensor = torch.nn.Parameter(torch.from_numpy(param_np))
        params.append(param_tensor)
    return params
Error Log:
Process Process-1:
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.6.4_4/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/local/Cellar/python/3.6.4_4/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/Users/srujithpoondla/largescaleml_project/train_redis.py", line 33, in train_redis
train_redis_epoch(epoch, args, model, train_loader, optimizer,shapes_len, loop)
File "/Users/srujithpoondla/largescaleml_project/train_redis.py", line 43, in train_redis_epoch
params = get_params_redis(shapes_len,loop)
File "/Users/srujithpoondla/largescaleml_project/common_functions.py", line 76, in get_params_redis
params = loop.run_until_complete(multi_get_key_redis(keys))
File "/usr/local/Cellar/python/3.6.4_4/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 454, in run_until_complete
self.run_forever()
File "/usr/local/Cellar/python/3.6.4_4/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 421, in run_forever
self._run_once()
File "/usr/local/Cellar/python/3.6.4_4/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 1395, in _run_once
event_list = self._selector.select(timeout)
File "/usr/local/Cellar/python/3.6.4_4/Frameworks/Python.framework/Versions/3.6/lib/python3.6/selectors.py", line 577, in select
kev_list = self._kqueue.control(None, max_ev, timeout)
OSError: [Errno 9] Bad file descriptor

gdata.docs.client.DocsClient

I have the following code. It reads an OAuth2 token from a file, then tries to perform a docs-list query to find a specific spreadsheet that I want to copy. However, no matter what I try, the code either errors out or returns an object containing no document data.
I am using gdata.docs.client.DocsClient, which as far as I can tell is version 3 of the API.
def CreateClient():
    """Create a Documents List Client."""
    client = gdata.docs.client.DocsClient(source=config.APP_NAME)
    client.http_client.debug = config.DEBUG
    # Authenticate the user with ClientLogin, OAuth, or AuthSub.
    if os.path.exists(config.CONFIG_FILE):
        f = open(config.CONFIG_FILE)
        tok = pickle.load(f)
        f.close()
        client.auth_token = tok.auth_token
    return client
My first query attempt:
def get_doc():
    new_api_query = gdata.docs.client.DocsQuery(title='RichSheet', title_exact=True, show_collections=True)
    d = client.GetResources(q=new_api_query)
This fails with the following stack trace:
Traceback (most recent call last):
File "/Users/richard/PycharmProjects/reportone/make_my_report.py", line 83, in <module>
get_doc()
File "/Users/richard/PycharmProjects/reportone/make_my_report.py", line 57, in get_doc
d = client.GetResources(q = new_api_query)
File "/Users/richard/PycharmProjects/reportone/gdata/docs/client.py", line 151, in get_resources
**kwargs)
File "/Users/richard/PycharmProjects/reportone/gdata/client.py", line 640, in get_feed
**kwargs)
File "/Users/richard/PycharmProjects/reportone/gdata/docs/client.py", line 66, in request
return super(DocsClient, self).request(method=method, uri=uri, **kwargs)
File "/Users/richard/PycharmProjects/reportone/gdata/client.py", line 267, in request
uri=uri, auth_token=auth_token, http_request=http_request, **kwargs)
File "/Users/richard/PycharmProjects/reportone/atom/client.py", line 115, in request
self.auth_token.modify_request(http_request)
File "/Users/richard/PycharmProjects/reportone/gdata/gauth.py", line 1047, in modify_request
token_secret=self.token_secret, verifier=self.verifier)
File "/Users/richard/PycharmProjects/reportone/gdata/gauth.py", line 668, in generate_hmac_signature
next, token, verifier=verifier)
File "/Users/richard/PycharmProjects/reportone/gdata/gauth.py", line 629, in build_oauth_base_string
urllib.quote(params[key], safe='~')))
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.py", line 1266, in quote
if not s.rstrip(safe):
AttributeError: 'bool' object has no attribute 'rstrip'
Process finished with exit code 1
Then my second attempt:
def get_doc():
    other = gdata.docs.service.DocumentQuery(text_query='RichSheet')
    d = client.GetResources(q=other)
This returns a ResourceFeed object, but it has no content. I have been through the source code for these functions, but nothing obvious stands out.
Have I missed something? Or should I go back to version 2 of the API?