I'm attempting to upload graphs made/edited in CloudConnect to GoodData via the API. I have been trying to use this call: http://docs.gooddata.apiary.io/#cloudconnectprocesses
The actual call I'm making uses the JSON body {"process": {"path": "/uploads/Bonobos_v6-1.grf", "name": "Bonobos Prod"}}
However, when I try to run this, it fails with:
{
  "error": {
    "errorClass": "com.gooddata.msf.processes.InvalidProcessException",
    "trace": "",
    "message": "Can not read from file \"/uploads/Bonobos_v6-1.grf\"",
    "component": "MSF",
    "errorId": "83090caa-31c9-4ce2-bb79-040d5c4d2421",
    "errorCode": "gdc1151",
    "parameters": []
  }
}
Is there a specific way of creating a "process" that then needs to get uploaded to the server? I've tried both zip files of multiple graphs and individual .grf files, but to no avail. I'm also assuming that the error does not mean that GoodData can't see the file, but that would certainly explain some things.
First of all, you have to check where your project is located (na1 or secure). If your project resides on na1, follow this procedure:
zip your CloudConnect project (it doesn't matter whether you zip the whole folder or just its content)
upload the zip file to WebDAV (na1-di.gooddata.com/uploads) using curl:
curl -k -T zippedCcProject.zip https://my_login%40company.com:my_password@na1-di.gooddata.com/uploads/zippedCcProject.zip
open a browser, go to the processes REST resource at https://na1.secure.gooddata.com/gdc/projects/{projectId}/dataload/processes/, fill in the proper attributes (type=GRAPH, name=myCloudConnectProject, path=/uploads/zippedCcProject.zip) and hit 'create the process' (a sample request body is shown below)
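A request body matching those attributes would look like this (the name and path are the example values from this procedure; substitute your own):
{
  "process": {
    "type": "GRAPH",
    "name": "myCloudConnectProject",
    "path": "/uploads/zippedCcProject.zip"
  }
}
Note that the body in the original question pointed at a single .grf file rather than an uploaded zip archive, and did not set the type attribute.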
Before calling this API you have to pack all the files in your CloudConnect project into an archive and PUT it on the server. Have you done this?
So the whole process will be:
ZIP all files (e.g. workspace.prm) and folders (graphs, meta, trans, ...) from the CloudConnect project folder into an archive (please do not add the data folder if it holds a larger volume of data; store such data in an external location instead)
PUT the archive on the WebDAV server (for example na1-di.gooddata.com/uploads/...)
Call the API to deploy it (the path will be "/uploads/your-folder/name-of-the-archive")
Remember: if your project is on https://secure.gooddata.com, your WebDAV server is https://secure-di.gooddata.com/uploads/; if your project is on https://na1.gooddata.com, you have to use https://na1-di.gooddata.com/uploads/
Let me know if this helps you. We need to clarify this info in the API docs anyway.
Thanks!
As an example of how to PUT the file to the WebDAV server, you can use the following request:
curl -i -v -X PUT --data-binary @project.zip https://username%40company.com:PASSWORD@na1-di.gooddata.com/uploads/project.zip
You can check that the file is in place by accessing it via a web browser. Then you can call the API as specified above.
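For completeness, the deploy call itself could look roughly like this (a sketch only: {projectId} is a placeholder for your project's ID, and authentication is omitted because it depends on how your GoodData session is established):
curl -k -X POST -H "Content-Type: application/json" -H "Accept: application/json" -d '{"process": {"type": "GRAPH", "name": "myCloudConnectProject", "path": "/uploads/project.zip"}}' https://na1.secure.gooddata.com/gdc/projects/{projectId}/dataload/processes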
I have developed a small web application that runs a web server in golang.
Each user can log in, view the list of their docs (previously uploaded) and click on an item to view an HTML page that shows some fields of the document plus a tag with a src attribute.
The src attribute contains a URL like "mydocuments/download/123-456-789.pdf".
On the server side I handle the URL ("mydocuments/download/*") via an HTTP handler:
mymux.HandleFunc(pat.Get("/mydocuments/download/:docname"), DocDownloadHandler)
where:
I check that the user has the rights to view the document in the URL
Then I create a file server that re-maps the URL to the real path of the folder where the files are stored on the server's filesystem
fileServer := http.StripPrefix("/mydocs/download/", http.FileServer(http.Dir("/the-real-path-to-documents-folder/user-specific-folder/")))
and of course I serve the files
fileServer.ServeHTTP(w, r)
IMPORTANT: the directory where the documents are stored is not the static-files directory I use for the website, but a directory where all files end up after being uploaded by users.
My QUESTION
As I am trying to convert the code to work also on Google Cloud, I am changing it so that files are stored in a bucket (or, better, in "sub-directories" of a bucket, even though those do not properly exist in object storage).
How can I modify the code so that the document URL maps to the file stored in the Cloud Storage bucket?
Can I still use the http.FileServer technique above (and if so, what should I use instead of http.Dir to map the bucket "sub-folder" path where the documents are stored)?
I hope I was clear enough in explaining my issue...
I apologise in advance for any unclear points...
Some options are:
Give the user direct access to the resource using a signed URL.
Write code to proxy the request to GCS (a sketch of this follows below).
Use http.FS with an fs.FS backed by GCS.
It's possible that an fs.FS for GCS already exists, but you may need to write one.
You can use http.FileSystem since it is an interface and can be implemented however you like.
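Here is a minimal sketch of the second option (proxying the request to GCS). It assumes the goji/pat router from the question, the cloud.google.com/go/storage client, and a hypothetical bucket named my-docs-bucket holding objects under a user-specific-folder/ prefix; the authorization check is left as a comment:
package main

import (
	"context"
	"io"
	"log"
	"net/http"

	"cloud.google.com/go/storage"
	goji "goji.io"
	"goji.io/pat"
)

// gcsClient is shared by all handlers; it is created once at startup.
var gcsClient *storage.Client

func DocDownloadHandler(w http.ResponseWriter, r *http.Request) {
	// ... check here that the user has the rights to view the document ...

	docname := pat.Param(r, "docname")

	// A "sub-directory" in a bucket is just an object-name prefix,
	// so the user-specific folder becomes part of the object name.
	obj := gcsClient.Bucket("my-docs-bucket").Object("user-specific-folder/" + docname)

	rc, err := obj.NewReader(r.Context())
	if err != nil {
		http.Error(w, "document not found", http.StatusNotFound)
		return
	}
	defer rc.Close()

	// Forward the stored content type and stream the object to the client.
	w.Header().Set("Content-Type", rc.Attrs.ContentType)
	io.Copy(w, rc)
}

func main() {
	ctx := context.Background()

	// Uses Application Default Credentials when running on Google Cloud.
	var err error
	gcsClient, err = storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}

	mux := goji.NewMux()
	mux.HandleFunc(pat.Get("/mydocuments/download/:docname"), DocDownloadHandler)
	log.Fatal(http.ListenAndServe(":8080", mux))
}
With this approach http.StripPrefix and http.FileServer are no longer involved. If you would rather keep the http.FileServer call, you would have to implement http.FileSystem over the bucket yourself, and satisfying the Seek and Readdir methods of http.File on top of GCS is noticeably more work than the proxy above.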
So I'm pretty new at all this. I am trying to reverse engineer a web application.
When I submit a form, it sends a POST with a request payload that looks something like this:
encoding=UTF8&zip=1&size=136240&html=DwQgIg_a_whole_lot_more_gibberish_not_worth_posting
Anyway, from inspecting the captured traffic in the Chrome developer tools, I noticed that it seems to be encoded and sent as zipped-up HTML.
How would I go about reversing this to see what the content is actually being sent to the server?
What you want to do is this:
1) Get the name of the zip file
2) Get the path of the zip file (likely the root directory or the current path the form is at)
3) Generate the URL (http://site_name.com/path/to/folder/zip_file.zip)
4) Download it using a tool such as wget (typing the URL into the browser may work too); see the example below
I used this technique to download all the files that get delivered as OTA updates to iOS devices (I used Burp Suite to intercept the zip file name; the proxy server was on my computer, which my iDevice was connected to).
Please note: the name of the zip file you have given does not end in .zip. This may mean it doesn't have an extension, you may have to add .zip to the file manually, or it may have another ending such as .tar, .tar.gz etc.
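For example, using the hypothetical URL from step 3 (substitute the actual host and file name you intercepted):
wget http://site_name.com/path/to/folder/zip_file.zip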
I am following this tutorial to use parse.com hosting,
https://parse.com/apps/quickstart#hosting/windows
It says a config folder with a JSON file will be created by "parse new", but it isn't; I only get the public and cloud folders. Not sure what's going wrong here. Does anyone know where I can find a copy of the file so I can put it into my folder and configure its public URL?
The guide says :
The config directory contains a JSON configuration file that you shouldn't normally need to deal with
And then afterwards says
In the 'Hosting' section of your app's settings, you'll see a field at the top that allows you to set your subdomain, e.g. your-custom-domain.parseapp.com.
Perhaps that is what you are looking for?
Using the hosted Team Foundation Service at tfs.visualstudio.com, one has the option in a Build Definition to "Copy build output to the server", which creates a zip of the drop folder that can be downloaded over HTTPS via Team Web Access. I really need to download this drop automatically, so I can chain it as input to the next stage in my build pipeline.
Unfortunately, the drop URL is not obvious, but can be created using the TfsDropDownloader.
TL;DR - I can't get the TfsDropDownloader to work; I'm hoping someone else has used this tool or a similar method to successfully download a drop from https://tfs.visualstudio.com
Using the command line TfsDropDownloader.exe I can do this:
TfsDropDownloader.exe /c:"https://MYPROJECTNAME.visualstudio.com/DefaultCollection" /t:"ProjectName" /b:"BuildDefinitionName" /u:username /p:password
...and get an empty zip file with the correct build label name of the last successful build e.g. BuildDefinitionName_20130611.1.zip
Running the source code in the debugger shows that this is because the URL that is generated for downloading:
https://tflonline.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop/BuildDefinitionName_20130611.1.zip
...returns a content type of application/json, which is unsupported. This exception is swallowed by the application, but not before the empty zip file is created.
Is it possible the REST API on Team Foundation Service has changed in some way so the generated URL is no longer correct?
Note that I am using the "alternate credentials" defined on my Team Foundation Service account (i.e. not my Live ID); using anything else gets me TF30063: not authorized.
I got it working by using alternate credentials, but I also had to access the REST API via a different path.
The current TfsDropDownloader builds a URL that looks like this:
https://project.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop/BuildDefinitionName_20130611.1.zip
This returns empty JSON whenever I try to use it. I'm definitely authenticated, because if I tweak the URL to:
https://project.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop
I get a nice JSON listing of every single file in the drop, but no zip.
By spying on the SSL traffic to https://tfs.visualstudio.com with Fiddler while clicking the "Download drop as zip" link, I can see that there is another endpoint at:
https://project.visualstudio.com/DefaultCollection/ProjectName/_api/_build/ItemContent?buildUri=vstfs%3a%2f%2f%2fBuild%2fBuild%2f639&path=%2Fdrop
...which does give you a zip. The "vstfs%3a%2f%2f%2fBuild%2fBuild%2f639" portion is the URL encoded BuildUri.
So I've changed my version of GetServerPath in the TfsDropDownloader source to do this:
private static string GetServerPath(TfsConnection collection, IBuildDetail buildDetail)
{
    var downloadPath = string.Format("{0}{1}/_api/_build/ItemContent?buildUri={2}&path=%2Fdrop",
        collection.Uri,
        HttpUtility.UrlPathEncode(buildDetail.TeamProject),
        HttpUtility.UrlEncode(buildDetail.Uri.ToString()));

    return downloadPath;
}
This works for me for the time being. Hopefully this helps someone else with the same problem!
I know that a similar question was asked here, however I still can't get this to work since my case is a bit different.
I want to be able to create a folder in Google Drive using the google-drive-ruby gem.
According to Google (https://developers.google.com/drive/folder), when using the Drive API you can create a folder by inserting a file with the mime type "application/vnd.google-apps.folder".
e.g.
POST https://www.googleapis.com/drive/v2/files
Authorization: Bearer {ACCESS_TOKEN}
Content-Type: application/json
...
{
"title": "pets",
"parents": [{"id":"0ADK06pfg"}]
"mimeType": "application/vnd.google-apps.folder"
}
In my case I want to be able to do the same thing using the google_drive API. It has the upload_from_file method, which accepts a :content_type option; however, this still doesn't work for me. The best result that I got so far, when executing the following code, was this error message from Google:
session.upload_from_file("test.zip", "test", :content_type => "application/vnd.google-apps.folder")
"Mime-type application/vnd.google-apps.folder is invalid. Files cannot
be created with Google mime-types.
I'd appreciate any suggestions.
It's actually pretty straightforward. A folder in Google Drive is a GoogleDrive::Collection (http://gimite.net/doc/google-drive-ruby/GoogleDrive/Collection.html) in google-drive-ruby gem. Therefore, what you may do with google-drive-ruby is first create a file, and then add it to a collection via the GoogleDrive::Collection#add(file) method.
This also mimics the way that Google Drive actually works: upload a file to the root collection/folder, then add it to other collections/folders.
Here's some sample code that I had written. It should work, perhaps with some minor tweaking for your specific use case, based on the context that you provided:
# this example assumes the presence of an authenticated
# `GoogleDrive::Session` referenced as `session`
# and a file named `test.zip` in the same directory
# where this example is being executed
# upload the file and get a reference to the returned
# GoogleDrive::File instance
file = session.upload_from_file("test.zip", "test")
# get a reference to the collection/folder to which
# you want to add the file, via its folder name
folder = session.collection_by_title("my-folder-name")
# add the file to the collection/folder.
# note, that you may add a file to multiple folders
folder.add(file)
Further, if you only want to create a new folder, without putting any files in it, then just add it to the root collection:
session.root_collection.create_subcollection("my-folder-name")