Google API service account authentication can't find JSON credentials file - google-api

I can't get the Google API to find my service account's credentials. I downloaded the necessary JSON file with the right name into the proper place, and I'm using Python code straight off the API documentation:
import gspread
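# With no arguments, service_account() looks for service_account.json in gspread's default config directory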
gc = gspread.service_account()
sh = gc.open("Example spreadsheet (I'll replace this with my actual sheet name later)")
print(sh.sheet1.get('A1'))
The code stops at gc = gspread.service_account() with a FileNotFoundError. I discovered via the error message that it's looking at completely the wrong file path (I think it assumes I'm on a Mac when I'm actually on a Windows PC??). Overriding the file name, i.e.
gc = gspread.service_account(filename="insert\actual\path\here.json"),
does not work either, which is the mystifying part. I copied that path straight out of my file explorer, doubled the backslashes so Python doesn't treat them as escape sequences (that happened once), and tried every modification of the file path I could think of (%APPDATA%\gspread\service_account.json instead of the whole thing, etc.) - what could be going wrong?
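For reference, a minimal sketch of the spellings I'd expect to work on a local Windows setup (assuming the %APPDATA%\gspread location mentioned above; the exact paths are placeholders):
import os
import gspread
from pathlib import Path

# Raw string: the r prefix stops Python from treating the backslashes as escapes
gc = gspread.service_account(filename=r"C:\Users\me\AppData\Roaming\gspread\service_account.json")

# Or build the %APPDATA%\gspread\service_account.json path explicitly
creds = Path(os.environ["APPDATA"]) / "gspread" / "service_account.json"
gc = gspread.service_account(filename=creds)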
Edit: #mods, feel free to close the question! I found the issue, which is that I was using the Repl.it online coding environment instead of a local one. I ported everything over to IDLE and it worked fine. I strongly suspect Repl.it just couldn't access my local files at all (I also tried it on Repl.it with a random screenshot in a different place, and it threw the same error).

Related

Issues with Swashbuckle

I have a WebAPI service, written in ASP.NET (not Core), for which I am trying to generate documentation, in order to allow other devs to use it. I found Swashbuckle, and installed it. Then, since I also use OData for some of my services, I added Swashbuckle.OData. Then, I modified the CustomProvider setting in SwaggerConfig to use the ODataSwaggerProvider. I also set ResolveConflictingActions(apiDescriptions => apiDescriptions.First()) because I had a few Actions with the same URL path, differing only by query string (I'll need to address that later). So far so good.
Then, I tested it. I started my web app, then added "/swagger/" to the end of the URL. I got a message stating that it was loading the resource info. However, after several minutes, I got a browser error debug popup stating "Error: Not enough storage is available to complete this operation." It asks if I want to debug, and if I do, it takes me to the debugger in IE (the browser I'm using). The only code in the stack is either from jquery-1.8.0.min.js or swagger-ui.min.js (this part confuses me, as there is no "swagger-ui.min.js" file in my project; I'm assuming it's embedded in the dll). There is no part of the stack trace that floats back up to my code, and all the code there is minified, so it's very difficult to debug.
However, I do know that it is at least partially working, as three of the controllers do show up in the resulting page after you close the error popup. You can navigate through them, and all the GETs, POSTs, PUTs, and DELETEs seem to be there, and you can test them.
Is it the case that whenever you navigate to the "/swagger/" url, Swagger hits all the URLs in the service, in order to generate the documentation? I'm wondering if maybe it is hitting an action that is taking a particularly long time to run, or possibly its generated documentation is taking too much disk space (I have plenty of space on my disk, but maybe it is referring to RAM?).
Anyway, even if that were not an issue, how can I get it to generate something, some kind of document file, that I can send off to someone? I see no new files added to my folders, so it would seem that it re-does the whole process every time you navigate to the swagger URL.
When I tried the Chrome browser, I no longer had the issue (I was using IE11 before). Not sure what the problem was, but this was the workaround.

Google Drive API v3: isn't there any way to get a download URL for a Google document?

The Google Drive API v2 to v3 migration guide says:
The exportLinks field has been removed from files. To export Google Documents, use the files.export method instead.
I don't want to export (download) the file right away; files.export will actually download the file. I want a link to download the file later. This was possible in v2 by means of exportLinks.
How can I accomplish the same in v3? If it is not possible, why was this useful feature removed?
Besides (a similar problem to the above), downloadUrl was also removed, and the suggested alternative (files.get with ?alt=media) downloads the file instead of providing a download link. Does this mean there is no way in v3 to get a public, short-lived URL for a file?
EDIT:
there is no way in v3 to get a public short lived URL for a file?
For regular files, apparently yes.
This seems to work fine (a public short-lived link to the file with its right name and contents):
https://www.googleapis.com/drive/v3/files/ID?alt=media&access_token=TOKEN
For Google Apps files, no (not even a private one, as v2 exportLinks used to be).
https://www.googleapis.com/drive/v3/files/ID/export?mimeType=TYPE&access_token=TOKEN
Similar to regular files, this URL is a short-lived link to the file contents, but it lacks the right name.
BTW, I see the API is not behaving consistently: /drive/v3/files/FILEID delivers the right file name, but /drive/v3/files/FILEID/export does not.
I think the API itself should be setting the right Content-Disposition, as it is apparently doing when issuing a /drive/v3/files/FILEID call.
This file naming problem invalidates the workaround to the lack of ExportLinks in v3.
The v2 ExportLinks allowed me to link a file (which is not the same as getting its content right away). Anyone logged in and with the proper permissions was able to access it, the link didn't need any access_token, and it wasn't short-lived. It was good and useful.
Building a link with a raw API call like /drive/v3/files/FILEID/export (with a mandatory access_token) would be a close enough workaround (it is temporary and public, not the same as it was, anyway). However, the naming problem invalidates it.
In v2, regular files have a WebContentLink and google apps files have exportLinks. In v3 exportLinks are gone, and I don't see any suitable alternative to them.
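A workaround for the naming problem, then, is to fetch the name separately and save the exported bytes under it. A minimal sketch against the raw REST endpoints (FILE_ID, TOKEN and the CSV mime type are placeholders, and the requests library is an assumption):
import requests

FILE_ID = "FILE_ID"  # placeholder
TOKEN = "TOKEN"      # placeholder OAuth2 access token
headers = {"Authorization": "Bearer " + TOKEN}

# files.get returns the metadata, including the document's real name
meta = requests.get(
    "https://www.googleapis.com/drive/v3/files/" + FILE_ID,
    headers=headers, params={"fields": "name"},
).json()

# files.export streams the converted bytes, but without a useful filename
export = requests.get(
    "https://www.googleapis.com/drive/v3/files/" + FILE_ID + "/export",
    headers=headers, params={"mimeType": "text/csv"},
)

# Save under the real name, with an extension matching the export mime type
with open(meta["name"] + ".csv", "wb") as f:
    f.write(export.content)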
Once you query for your file by ID, you can use the function getWebContentLink() to get the download link of the file (e.g. $file->getWebContentLink() in the PHP client).
I think you're placing too much emphasis on the word "method".
There is still a link to export a file; it's https://www.googleapis.com/drive/v3/files/fileIdxxxxx/export?mimeType=xxxxx/xxxxx. Make sure you URL-encode the mime type.
E.g.
https://www.googleapis.com/drive/v3/files/1fGBQ81haNU_nEiC5GITZD3bxT0ppL2LHg-C0ubD4Q_s/export?mimeType=text/csv&access_token=ya29.Gmo0BMvO-pVEPKsiD9j4D-NZVGE91MChRvwOcBSg3cTHt5uAClf-jFxcovQScbO2QQhwHS95eSGW1eQQcK5G1UQ6oI4BFEJJkntEBkgriZ14GbHuvpDL7LT2pKA--WiPuNoDDIuZMm5lWtlr
These links form part of the API, so the expectation is that you've written a client that sends authenticated requests and deals with the response data. This explains why, if you simply paste the link into a browser without an access_token, it will fail. It also explains why the filename is export, i.e. it isn't intended that your client would ever use a filename; rather, it should receive the data as a stream. This SO answer discusses the situation in more detail: How to set name of file downloaded from browser?
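For instance, a quick sketch of building such a link in Python (the ID and token are placeholders):
from urllib.parse import quote

file_id = "FILE_ID"  # placeholder
token = "TOKEN"      # placeholder
# safe="" makes quote() encode the slash as well: text/csv -> text%2Fcsv
mime = quote("text/csv", safe="")
url = ("https://www.googleapis.com/drive/v3/files/" + file_id +
       "/export?mimeType=" + mime + "&access_token=" + token)
print(url)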

Dredd can't find my API documentation, how do I tell it where it is if it's not on my local drive (it's on the apiary.io server)?

I am using the Dredd tool to test my API (which resides on apiary.io).
Question
I would like to provide Dredd with a path to my documentation (it even asks for it); however, my API doc is on apiary.io and I don't know the exact URL that points to it. What would be the correct way to provide Dredd with the API path?
What did work (but not what I'm looking for)
Note: I tried downloading the API description to my local drive and providing Dredd with a local path to the file (yml or apib), which works fine (yay!), but I would like to avoid keeping a local copy and simply provide Dredd with the location of my real API doc, which is maintained on the Apiary server.
How do I do this (without first fetching the file to my local drive)?
Attempts to solve this that failed
I also read (and tried) the following topics; they may be relevant, but I wasn't successful in resolving the issue:
- Using authentication token as environment variable
- Providing the domain provided by apiary.io//settings to dredd
- Providing the in the dredd command
All of these attempts still produce the same result: Dredd has no idea where to find the API document unless I provide a path to the file on my local computer (which I have to download or create manually first).
Any help is appreciated, Thanks!
If I understand it correctly, you would like to use Dredd and feed it the API description document residing on the Apiary.io platform, right?
If so, you should be able to do that by simply calling the init command with the right options:
dredd init -r apiary -j apiaryApiKey:privateToken -j apiaryApiName:sasdasdasd
You can find the private token by going into the Test section of the target API (you'll find the button on the application header).
Let me know if this solves the problem for you - I'll make sure to propagate this and document it accordingly on our help page.
P.S.: You can also use your own reporter - in that case, simply omit -r apiary from the command-line parameters.
You can feed Dredd not only with a path to a file on your disk, but also with a URL.
If your API in Apiary is public, the API description document (in this case API Blueprint) should have a public URL. For example, if you go to http://docs.apiblueprintapi.apiary.io/, you can see there is a Download link on the left. Unfortunately, the link is visible only to users who do not have access to the editor of the API, so you can't see the link if you're the owner of the API. Try logging out from Apiary and the link should appear.
Then you can feed Dredd with the link:
$ dredd 'http://docs.apiblueprintapi.apiary.io/api-description-document' 'http://example.com:8080/api'
I agree this isn't very intuitive, and since you're not the first one to come up with this, I think we'll look at some ways to make it easier.
If your API isn't public then unfortunately there's no way to get the URL as of now. However, you can either use GitHub Sync or Apiary CLI to get the file on your disk in an automated manner.

Play! Framework 2.1.3 PDF problems

So I am working on a school project in which we have designed a web application that takes in a lot of user info and creates a PDF, then displays that PDF to the user so they can print it off or save it. We are using Play! Framework 2.1.3 as our framework and server, with Java on the server side. I create the PDF with Apache's PDFBox library. Everything works as it should in development mode, i.e. launching on localhost with Play's run command. The issue is when we put it up on the server and launch with Play's start command: it seems to take a snapshot of the directory (or at least the assets/public folder), which is where I am housing the output.pdf file(s) (I have attempted to move the file elsewhere, but that still results in a 404 error). Initially I believed this to be something with the Linux machine we were deploying to creating a caching problem, and I tried many of the tricks to keep the browser from caching the PDF,
like using JavaScript to append a timestamp to the filename,
or using this cache-control directive from the Play! documentation:
"assets.cache./public/stylesheets/output.pdf"="max-age=0"
Then I tried saving the PDF under a different filename each time, passing back the name of that file, and calling it directly through the file structure in the HTML,
which also works fine with the run command but not with start.
Finally I came to the conclusion that when the start command is issued, it bundles up the files, so only the files that are there when the start command is issued can be seen.
I read the documentation here:
http://www.playframework.com/documentation/2.1.x/Production
where I noticed this part:
When you run the start command, Play forks a new JVM and runs the
default Netty HTTP server. The standard output stream is redirected to
the Play console, so you can monitor its status.
So it looks like the fact that it forks a new JVM is what is causing my pain.
So my question really is: can this be worked around in some way, so that a web app can create and display a PDF form? (If I cannot get this to work, the only solution I can see is to simulate the form with HTML and fill it out from there - which I really think is a bad way to do this.)
This seems like something that should have a solution, but I cannot seem to find or come up with one. Please help.
I have looked here:
http://www.playframework.com/documentation/2.1.x/JavaStream
The answer may be in there, but I'm not getting it to work; I am still pretty new to the Play! Framework.
You are trying to deliver the generated PDF file to the user by placing it in the assets directory, and putting a link to it in the HTML. This works in development mode because Play finds the assets in the directory. It won't work in production because the project is wrapped up into a jar file when you do play dist, and the contents of the jar file can't be modified by the Play application. (In dev mode, Play has a classpath entry for the directory. In production, the classpath points to the jar file).
You are on the right lines with JavaStream. The way forward is:
Generate the PDF somewhere in your local filesystem (I recommend the temp directory).
Write a new Action in your Application object that opens the file you generated, and serves it instead of a web page.
Check out the Play docs for serving files. This approach also has the advantage that you can specify the filename that the user sees. There is an overloaded function Controller.ok(File file, String filename) for doing this. (When you generate the file, you should give it a unique name, otherwise each request will overwrite the file from a previous request. But you don't want the user to see the unique name).

Launching a registered mime helper application

I used to be able to launch a locally installed helper application by registering a given mime-type in the Windows registry. This let users click once on a link to the current install of our internal browser application. It worked fine in Internet Explorer 5 (most of the time) and in Firefox, but it no longer works in Internet Explorer 7.
The filename passed to my shell/open/command is not the full physical path to the downloaded install package. The path parameter I am handed by IE is
"C:\Document and Settings\chq-tomc\Local Settings\Temporary Internet Files\
EIPortal_DEV_2_0_5_4[1].expd"
This unfortunately does not resolve to the physical file when calling FileExists() or when attempting to create a TFileStream object.
The physical path is missing the Internet Explorer hidden caching sub-directory for Temporary Internet Files of "Content.IE5\ALBKHO3Q" whose absolute path would be expressed as
"C:\Document and Settings\chq-tomc\Local Settings\Temporary Internet Files\
Content.IE5\ALBKHO3Q\EIPortal_DEV_2_0_5_4[1].expd"
Yes, the sub-directories are randomly generated by IE and that should not be a concern so long as IE passes the full path to my helper application, which it unfortunately is not doing.
Installation of the mime helper application is not a concern. It is installed/updated by a global login script for all 10,000+ users worldwide. The mime helper is only invoked when the user clicks on an internal web page with a link to an installation of our Desktop browser application. That install is served back with a mime-type of "application/x-expeditors". The registration of the ".expd" / "application/x-expeditors" mime-type looks like this.
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\.expd]
#="ExpeditorsInstaller"
"Content Type"="application/x-expeditors"
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller]
"EditFlags"=hex:00,00,01,00
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller\shell]
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller\shell\open]
#=""
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ExpeditorsInstaller\shell\open\command]
#="\"C:\\projects\\desktop2\\WebInstaller\\WebInstaller.exe\" \"%1\""
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\MIME\Database\Content Type\application/x-expeditors]
"Extension"=".expd"
I had considered enumerating all of a user's IE cache entries but I would be concerned with how long it may take to examine them all or that I may end up finding an older cache entry before the current entry I am looking for. However, the bracketed filename suffix "[n]" may be the unique key.
I have tried the wininet method GetUrlCacheEntryInfo, but that requires the URL, not the virtual path handed over by IE.
My hope is that there is a Shell function that given a virtual path will hand back the physical path.
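To illustrate the enumeration idea from above (a rough sketch in Python rather than in the Delphi the helper is written in; the path comes from the question, and picking the newest match is an assumed heuristic):
import glob
import os

# The "virtual" path IE hands over, minus the hidden Content.IE5\<random> segment
virtual = r"C:\Document and Settings\chq-tomc\Local Settings\Temporary Internet Files\EIPortal_DEV_2_0_5_4[1].expd"
cache_dir, name = os.path.split(virtual)

# glob treats [1] as a character class, so the filename must be escaped
pattern = os.path.join(cache_dir, "Content.IE5", "*", glob.escape(name))
hits = glob.glob(pattern)
if hits:
    physical = max(hits, key=os.path.getmtime)  # assume the newest entry is the current download
    print(physical)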
I believe the sub-directories created by IE are randomly generated, so you won't be able to guarantee that it will be named the same every time, and the problem I see with the registry method is that it only works while the file is still in the cache - emptying the cache would purge the file, requiring yet another installation.
Would it not be better to install this helper into application data?
I'm not sure about this, but perhaps it may lead you in the right direction: try using the URL cache functions from the wininet DLL - FindFirstUrlCacheEntry, FindNextUrlCacheEntry, FindCloseUrlCache - for enumeration; when you locate an entry whose local file name matches the given path, you may be able to use RetrieveUrlCacheEntryFile to retrieve the file.
I am using a similar system with the X-Appl browser to display WAML web applications and it works perfectly. Maybe you should have a look at how they managed to do it.
It looks like iexplore is passing the shell namespace "name" of the file rather than the filesystem name.
I don't think there is a documented way to be passed a shell item ID on the command line - Explorer does it to itself, but there are marshaling considerations, as shell item IDs are (pointers to) binary data structures that are only valid in a single process.
What I might try doing is:
1. Call SHGetDesktopFolder, which will return the root IShellFolder object of the shell namespace.
2. Call IShellFolder::ParseDisplayName to turn the name you are given back into a shell item ID list.
3. Try IShellFolder::GetDisplayNameOf with the SHGDN_FORPARSING flag - which, frankly, feels like we've just gone in a complete circle and are back where we started, because I think it's this API that's ultimately responsible for returning the "wrong" filesystem-relative path.
Some follow-up to close out this question.
Turned out the real issue was how I was creating the file handle using TFileStream. I changed it to open with fmOpenRead or fmShareDenyWrite, which solved what turned out to be a file-locking issue.
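// Read-only, and deny writers while allowing other readers - avoids the exclusive lock that caused the failure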
srcFile := TFileStream.Create(physicalFilename, fmOpenRead or fmShareDenyWrite);
