Create a folder in Google Drive with the google-drive-ruby gem

I know that a similar question was asked here; however, I still can't get this to work since my case is a bit different.
I want to be able to create a folder in Google Drive using the google-drive-ruby gem.
According to Google (https://developers.google.com/drive/folder), when using the Drive API you can create a folder by inserting a file with the MIME type "application/vnd.google-apps.folder", e.g.:
POST https://www.googleapis.com/drive/v2/files
Authorization: Bearer {ACCESS_TOKEN}
Content-Type: application/json
...
{
  "title": "pets",
  "parents": [{"id": "0ADK06pfg"}],
  "mimeType": "application/vnd.google-apps.folder"
}
In my case I want to do the same thing, but through the google_drive API. It has an upload_from_file method that accepts a MIME type option; however, this still doesn't work for me. The best result I got so far, when executing the following code, was this error message from Google:
session.upload_from_file("test.zip", "test", :content_type => "application/vnd.google-apps.folder")
"Mime-type application/vnd.google-apps.folder is invalid. Files cannot
be created with Google mime-types.
I'd appreciate it if you could give me any suggestions.

It's actually pretty straightforward. A folder in Google Drive is a GoogleDrive::Collection (http://gimite.net/doc/google-drive-ruby/GoogleDrive/Collection.html) in the google-drive-ruby gem. Therefore, what you can do with google-drive-ruby is first create a file, and then add it to a collection via the GoogleDrive::Collection#add(file) method.
This also mimics the way that Google Drive actually works: upload a file to the root collection/folder, then add it to other collections/folders.
Here's some sample code I've written. It should work, perhaps with some minor tweaking for your specific use case, based on the context you provided:
# this example assumes the presence of an authenticated
# `GoogleDrive::Session` referenced as `session`
# and a file named `test.zip` in the same directory
# where this example is being executed
# upload the file and get a reference to the returned
# GoogleDrive::File instance
file = session.upload_from_file("test.zip", "test")
# get a reference to the collection/folder to which
# you want to add the file, via its folder name
folder = session.collection_by_title("my-folder-name")
# add the file to the collection/folder.
# note that you may add a file to multiple folders
folder.add(file)
Further, if you only want to create a new folder without putting any files in it, just create it under the root collection:
session.root_collection.create_subcollection("my-folder-name")

Related

Google Drive API v3: is there any way to get a download URL for a Google document?

The Google Drive API v2 to v3 migration guide says:
The exportLinks field has been removed from files. To export Google Documents, use the files.export method instead.
I don't want to export (download) the file right away; "files.export" will actually download the file. I want a link to download the file later. This was possible in v2 by means of exportLinks.
How can I accomplish the same in v3? If it is not possible, why was this useful feature removed?
Besides (a similar problem to the above), downloadUrl was also removed, and the suggested alternative ("files.get with ?alt=media") downloads the file instead of providing a download link. Does this mean there is no way in v3 to get a public short-lived URL for a file?
EDIT:
there is no way in v3 to get a public short-lived URL for a file?
For regular files, apparently yes.
This seems to work fine (a public short-lived link to the file with its right name and contents):
https://www.googleapis.com/drive/v3/files/ID?alt=media&access_token=TOKEN
For Google Apps files, no (not even a private one, as the v2 exportLinks used to be).
https://www.googleapis.com/drive/v3/files/ID/export?mimeType=TYPE&access_token=TOKEN
Similar to regular files, this URL is a short-lived link to the file contents, but lacking its right name.
BTW, I see the API is not behaving consistently: /drive/v3/files/FILEID delivers the right file name, but /drive/v3/files/FILEID/export does not.
I think the API itself should be setting the right Content-Disposition, as it is apparently doing when issuing a /drive/v3/files/FILEID call.
This file naming problem invalidates the workaround to the lack of ExportLinks in v3.
The v2 exportLinks allowed me to link to a file (which is not the same as getting its content right away). Anyone logged in and with the proper permissions was able to access it, the link didn't need any access_token, and it wasn't short-lived. It was good and useful.
Building a link with a raw API call like /drive/v3/files/FILEID/export (with a mandatory access_token) would be a close enough workaround (it is temporary and public, so not the same as it was, anyway). However, the naming problem invalidates it.
In v2, regular files have a webContentLink and Google Apps files have exportLinks. In v3, exportLinks are gone, and I don't see any suitable alternative to them.
Once you query for your file by ID, you can use the getWebContentLink() function to get the file's download link (e.g. $file->getWebContentLink()).
I think you're placing too much emphasis on the word "method".
There is still a link to export a file: https://www.googleapis.com/drive/v3/files/fileIdxxxxx/export?mimeType=xxxxx/xxxxx. Make sure you URL-encode the MIME type.
Eg
https://www.googleapis.com/drive/v3/files/1fGBQ81haNU_nEiC5GITZD3bxT0ppL2LHg-C0ubD4Q_s/export?mimeType=text/csv&access_token=ya29.Gmo0BMvO-pVEPKsiD9j4D-NZVGE91MChRvwOcBSg3cTHt5uAClf-jFxcovQScbO2QQhwHS95eSGW1eQQcK5G1UQ6oI4BFEJJkntEBkgriZ14GbHuvpDL7LT2pKA--WiPuNoDDIuZMm5lWtlr
These links form part of the API, so the expectation is that you've written a client that sends authenticated requests and deals with the response data. This explains why, if you simply paste the link into a browser without an access_token, it will fail. It also explains why the filename is "export", i.e. it isn't intended that your client would ever use a filename; rather, it should receive the data as a stream. This SO answer discusses the situation in more detail: How to set name of file downloaded from browser?
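As an illustration, here is a minimal Ruby sketch of such a client (the file ID, MIME type, and token below are placeholders, and it assumes you already hold a valid OAuth access token):
require "net/http"
require "uri"

# placeholders: substitute a real file ID, export MIME type and OAuth token
file_id = "FILE_ID"
mime_type = URI.encode_www_form_component("text/csv")
access_token = "ACCESS_TOKEN"

uri = URI("https://www.googleapis.com/drive/v3/files/#{file_id}/export?mimeType=#{mime_type}")
req = Net::HTTP::Get.new(uri)
req["Authorization"] = "Bearer #{access_token}"  # auth header instead of ?access_token=

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
# the response body is the exported content; the client chooses the filename
File.binwrite("export.csv", res.body) if res.is_a?(Net::HTTPSuccess)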

Parse "parse new" doesn't create configuration folder as expected

I am following this tutorial to use parse.com hosting,
https://parse.com/apps/quickstart#hosting/windows
It says a config folder with a JSON file will be created by "parse new", but it doesn't; I only get the public and cloud folders. Not sure what's going wrong here. Does anyone know where I can find a copy of the file and put it into my folder, so I can configure its public URL?
The guide says:
The config directory contains a JSON configuration file that you shouldn't normally need to deal with
and then afterwards says:
In the 'Hosting' section of your app's settings, you'll see a field at the top that allows you to set your subdomain, e.g. your-custom-domain.parseapp.com.
Perhaps that is what you are looking for?

How to use the GoodData API to upload graphs?

I'm attempting to upload graphs made/edited in CloudConnect to GoodData via the API. I have been trying to use this call: http://docs.gooddata.apiary.io/#cloudconnectprocesses
The actual call I'm making has the JSON {"process": {"path": "/uploads/Bonobos_v6-1.grf", "name": "Bonobos Prod"}}
However, when I try to run this, it fails with:
{
  "error": {
    "errorClass": "com.gooddata.msf.processes.InvalidProcessException",
    "trace": "",
    "message": "Can not read from file \"/uploads/Bonobos_v6-1.grf\"",
    "component": "MSF",
    "errorId": "83090caa-31c9-4ce2-bb79-040d5c4d2421",
    "errorCode": "gdc1151",
    "parameters": []
  }
}
Is there a specific way of creating a "process" that then needs to get uploaded to the server? I've tried both zip files of multiple graphs and individual .grf files, but to no avail. I'm also assuming that the error does not mean that GoodData can't see the file, but that would certainly explain some things.
First of all, you have to check where your project is located (na1 or secure). If your project resides on na1, follow this procedure:
zip your CloudConnect project (it doesn't matter whether you zip the whole folder or just its content)
upload the zip file to WebDAV (na1-di.gooddata.com/uploads) using curl: curl -k -T zippedCcProject.zip https://my_login%40company.com:my_password@na1-di.gooddata.com/uploads/zippedCcProject.zip
open a browser, go to the processes REST resource https://na1.secure.gooddata.com/gdc/projects/{projectId}/dataload/processes/, fill in the proper attributes (type=GRAPH, name=myCloudConnectProject, path=/uploads/zippedCcProject.zip) and hit 'create the process'
Before calling this API you have to pack all the files in your CloudConnect project and PUT them on the server. Have you done this?
So the whole process will be:
ZIP all files (e.g. workspace.prm) and folders (graphs, meta, trans, ...) from the CloudConnect project folder (please do not add the data folder if there is a bigger volume of data; store it in an external location instead)
PUT them on the WebDAV server (for example na1-di.gooddata.com/uploads/...)
Call the API to deploy it (the path will be "/uploads/your-folder/name-of-the-archive")
Remember: if your project is on https://secure.gooddata.com, your WebDAV server is https://secure-di.gooddata.com/uploads/; if your project is on https://na1.gooddata.com, you have to use https://na1-di.gooddata.com/uploads/
Let me know if this helps you. We need to clarify this info in the API docs anyway.
Thanks!
As an example of how to PUT the file to the WebDAV server, you can use the following request:
curl -i -v -X PUT --data-binary @project.zip https://username%40company.com:PASSWORD@na1-di.gooddata.com/uploads/project.zip
You can check if the file is in place by accessing it via web browser. Then you can call the API as specified above.
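If you'd rather script both steps than use curl and the browser form, here is a minimal Ruby sketch of the same flow. The host names, credentials, project ID, and file name are placeholders taken from the examples above; the WebDAV upload accepts basic auth as in the curl examples, but the dataload API call assumes your account can authenticate the same way, so you may need to swap in GoodData's token-based login:
require "net/http"
require "uri"
require "json"

LOGIN    = "my_login@company.com"  # placeholder credentials
PASSWORD = "my_password"

# step 1: PUT the zipped CloudConnect project on the WebDAV server
webdav = URI("https://na1-di.gooddata.com/uploads/project.zip")
Net::HTTP.start(webdav.host, webdav.port, use_ssl: true) do |http|
  put = Net::HTTP::Put.new(webdav)
  put.basic_auth(LOGIN, PASSWORD)
  put.body = File.binread("project.zip")
  puts http.request(put).code
end

# step 2: create the process, pointing at the uploaded archive
api = URI("https://na1.secure.gooddata.com/gdc/projects/PROJECT_ID/dataload/processes")
Net::HTTP.start(api.host, api.port, use_ssl: true) do |http|
  post = Net::HTTP::Post.new(api, "Content-Type" => "application/json")
  post.basic_auth(LOGIN, PASSWORD)  # assumption: replace with token-based auth if required
  post.body = { process: { type: "GRAPH", name: "Bonobos Prod", path: "/uploads/project.zip" } }.to_json
  res = http.request(post)
  puts res.code, res.body
end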

DocsList findFolder() issue

This is a google spreadsheet script question.
I have a GUI set up in order to search for "SouthWest" and then find a "test" sheet. This is the code I am using:
var file = DocsList.getFolder("SouthWest").find("test");
This works just fine when I run it under my account (as I have this folder and file set up correctly), but when another user is logged into Google Docs it will attempt to search for this folder/file under the new user instead of the owner of the document. Is there a way to have it just search the DocsList of the owner of the spreadsheet that is currently open? The error that I get under the new user is "Error encountered: Cannot find folder SouthWest." Thanks.
If you always want to access the same file, you can use the getFileById method and address it directly instead of searching every time:
https://developers.google.com/apps-script/class_docslist#getFileById
Of course, you should make sure that all users are allowed to access that file.
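For example, a one-line sketch (FILE_ID_HERE is a placeholder for the spreadsheet's actual ID, which is the same for every user):
// look the file up directly by its ID instead of searching the current user's DocsList
var file = DocsList.getFileById("FILE_ID_HERE");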

Replacing the body of a proxied subrequest with the contents of a file

I'm using the upload module to write the uploaded file to disk as soon as it arrives in nginx. In addition, I'd like to create 2 subrequests:
POST to a URL containing the uploaded file
POST to another URL without the uploaded file
The second request is easy to do because the upload module has already stripped out the upload. My problem is with the first request: how do I get the uploaded file back into the subrequest?
A solution for my question has been committed to the echo module.
The module you linked to has the upload_set_form_field directive and a few special variables (listed under that directive), which you can use to pass the file details to the backend as POST variables. The example given appears to put the upload back in the POST data. Can you adapt your backend script to make that work?
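For reference, a rough sketch of what that configuration might look like (the location names and paths here are made up for illustration; check the upload module's documentation for the exact directives your version supports):
location /upload {
    # write the upload to disk as soon as it arrives
    upload_pass /backend;
    upload_store /var/tmp/nginx_uploads;
    # re-inject the file details into the proxied request as POST fields
    upload_set_form_field $upload_field_name.name "$upload_file_name";
    upload_set_form_field $upload_field_name.content_type "$upload_content_type";
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";
}
location /backend {
    proxy_pass http://127.0.0.1:8080;
}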
