AMO CI always gives Duplicate add-on ID found - firefox

I'm currently trying to automate updates of my web extension in my build pipeline with this API / endpoint https://addons.mozilla.org/api/v3/addons.
So the actual command that I'm using looks like this:
curl "https://addons.mozilla.org/api/v3/addons/" -g -XPOST --form "upload=@dist/firefox/psono.firefox.PW.zip" -F "version=1.0.17" -H "Authorization: JWT ABCDEFG..."
(documentation here http://addons-server.readthedocs.io/en/latest/topics/api/signing.html#uploading-without-an-id )
I'm now at the point where I always get (after a lot of trial and error with the authentication):
{"error":"Duplicate add-on ID found."}
I have this in my manifest:
"manifest_version": 2,
"name": "psono.PW",
"description": "Psono Password Manager",
"version": "1.0.17",
... a lot of other stuff ...
"applications": {
  "gecko": {
    "id": "{3dce78ca-2a07-4017-9111-998d4f826625}"
  }
}
If I remove this "applications" attribute, then it passes, but it creates a new extension instead of updating the existing one. I already diffed the manifest of my existing extension and my new one, and besides some formatting of the JSON and the obvious difference of the version attribute, they look identical.
What am I missing, that the AMO API cannot actually match my update with my existing extension?

While I have not tested it, you are clearly sending the request to the URL which is for WebExtensions without an ID rather than the URL which is for uploading a new version of your add-on with an ID. AMO uses add-on IDs to match the add-on to the currently existing one. The only time a WebExtension does not have an ID is the first time you upload a new extension to AMO (and you have chosen not to assign an ID yourself during development).
After you have uploaded your add-on to AMO for the first time and the WebExtension is listed, it has an ID. Thus, I would assume that the documentation is just not 100% clear that uploading without an ID is only for uploading a new WebExtension add-on. The only thing that makes me think that the URL you are using might be intended for WebExtensions with IDs is the error messages which are claimed to be possible, but that list of errors may just be a copy-&-paste from another section.
Uploading a new WebExtensions add-on (without ID):
curl "https://addons.mozilla.org/api/v3/addons/" \
  -g -XPOST -F "upload=@build/my-addon.xpi" -F "version=1.0" \
  -H "Authorization: JWT <jwt-token>"
Uploading a new version of an add-on (with ID):
curl "https://addons.mozilla.org/api/v3/addons/@my-addon/versions/1.0/" \
  -g -XPUT --form "upload=@build/my-addon.xpi" \
  -H "Authorization: JWT <jwt-token>"
e.g. in your case:
curl "https://addons.mozilla.org/api/v3/addons/{3dce78ca-2a07-4017-9111-998d4f826625}/versions/1.0.17/" \
  -g -XPUT --form "upload=@dist/firefox/psono.firefox.PW.zip" \
  -H "Authorization: JWT ABCDEFG..."
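Putting this together, a minimal CI step might look like the sketch below. The add-on ID and version are taken from the question; the AMO_JWT variable and zip path are assumptions for illustration, and the actual curl call is left commented out since it needs a valid token:

```shell
# Sketch of a CI upload step for the "with ID" endpoint.
# ADDON_ID and VERSION would normally be read from manifest.json;
# AMO_JWT is a placeholder for the JWT generated in your pipeline.
ADDON_ID="{3dce78ca-2a07-4017-9111-998d4f826625}"
VERSION="1.0.17"
UPLOAD_URL="https://addons.mozilla.org/api/v3/addons/${ADDON_ID}/versions/${VERSION}/"
echo "$UPLOAD_URL"

# The actual upload (requires a valid JWT in AMO_JWT):
# curl "$UPLOAD_URL" -g -XPUT \
#   --form "upload=@dist/firefox/psono.firefox.PW.zip" \
#   -H "Authorization: JWT $AMO_JWT"
```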
I would consider this to be a problem in the documentation for the upload process where the description of the actual use of the WebExtensions-no-ID URL should be clarified.
Please test this to verify that you can upload a new version (including the applications in your manifest.json) using the normal URL for uploading a version. If you confirm that works, I'll submit a pull request to the documentation to make it more clear.
Note: The documentation on MDN, when talking about updating your add-on, states:
It's essential with this workflow that you update the add-on manually using its page on AMO, otherwise AMO will not understand that the submission is an update to an existing add-on, and will treat the update as a brand-new add-on.
However, it should be noted that even that section is talking about updating an add-on without an ID specified in the applications key. Thus, even that documentation is not 100% clear.
Alternate possibilities
If the URL you are using is actually intended for both new WebExtensions without an ID and new versions of already existing WebExtensions with an ID:
You have already uploaded a version 1.0.17 of your add-on. If so, there's a bug which returns the "Duplicate add-on ID found." error rather than an error explaining that a duplicate version exists for this ID. In that case, the documentation would also need clarification that the URL is for WebExtensions both with and without an ID: stating it's "without an ID" would only refer to the URL used and the parameters passed to the API.
You have not already uploaded a version 1.0.17 of your add-on. If so, there's a bug which does not allow uploading new versions of an already existing add-on ID. A simple resolution is to declare that this is the intended operation and change the documentation to make it clear that the "with ID" URL/PUT should be used instead of the new-add-on, "without ID" URL/POST.

Related

Dredd can't find my API documentation, how do I tell it where it is if it's not on my local drive (it's on the apiary.io server)

I am using the Dredd tool to test my API (which resides on apiary.io).
Question
I would like to provide Dredd with a path to my documentation (it even asks for it); however, my API doc is on apiary.io and I don't know the exact URL that points to it. What would be the correct way to provide Dredd with the API path?
What did work (but not what I'm looking for)
Note: I tried downloading the API doc to my local drive and providing Dredd with a local path to the file (yml or apib), which works fine (yay!), but I would like to avoid keeping a local copy and simply provide Dredd with the location of my real API doc, which is maintained on the Apiary server.
How do I do this (without first fetching the file to local drive)?
Attempts to solve this that failed
I also read about (and tried) the following topics; they may be relevant, but I wasn't successful in resolving the issue:
- Using authentication token as environment variable
- Providing the domain provided by apiary.io//settings to dredd
- Providing the in the dredd command
All of these attempts still produce the same result: Dredd has no idea where to find the API document unless I provide a path to a file on my local computer (which I have to download or create manually first).
Any help is appreciated, Thanks!
If I understand it correctly, you would like to use Dredd and feed it the API description document residing on the Apiary.io platform, right?
If so, you should be able to do that simply calling the init command with the right options:
dredd init -r apiary -j apiaryApiKey:privateToken -j apiaryApiName:sasdasdasd
You can find the private token going into the Test section of the target API (you'll find the button on the application header).
Let me know if this solves the problem for you - I'll make sure to propagate this and document it accordingly on our help page
P.S: You can also use your own reporter - in that case, simply omit -r apiary when writing the command line parameters.
You can feed Dredd not only with a path to a file on your disk, but also with a URL.
If your API in Apiary is public, the API description document (in this case, API Blueprint) should have a public URL. For example, if you go to http://docs.apiblueprintapi.apiary.io/, you can see on the left there is a Download link. Unfortunately, the link is visible only to users who do not have access to the editor of the API, so you can't see it if you're the owner of the API. Try logging out of Apiary and the link should appear:
Then you can feed Dredd with the link:
$ dredd 'http://docs.apiblueprintapi.apiary.io/api-description-document' 'http://example.com:8080/api'
I agree this isn't very intuitive, and since you're not the first one to come up with this, we'll think of ways to make it easier.
If your API isn't public then unfortunately there's no way to get the URL as of now. However, you can either use GitHub Sync or Apiary CLI to get the file on your disk in an automated manner.
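So an automated run in CI could simply pass the hosted URL straight to Dredd, no local copy needed. The sketch below only assembles and prints the command (the blueprint URL is the public example from above; the API under test is a placeholder):

```shell
# Sketch: feed Dredd the hosted blueprint URL directly.
# BLUEPRINT_URL is the public example doc; API_URL is a placeholder.
BLUEPRINT_URL="http://docs.apiblueprintapi.apiary.io/api-description-document"
API_URL="http://localhost:8080/api"
CMD="dredd $BLUEPRINT_URL $API_URL"
echo "$CMD"
```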

Trying to update a URL using curl on Linux

I am trying to update a ticket, and I am trying to do that through Linux. I'm not sure whether it can be done or not; while searching, I found a blog where someone was also trying to update something.
They used the command below to update the URL.
curl -i -u "script-user:password" -X PUT -d "short_description=Update+me" https:/0005000972
I suspect they are trying to update the resource at that URL with "Update me".
Is that right? Or can someone explain what that blogger tried to do?
This is a RESTful request, that is, a request using standard HTTP methods (GET, POST, PUT, DELETE) to query a server.
-X PUT specifies the method used
-d "...." specifies the data send to the server
https://... (ill formed in your example) is of course the URL of the target server
Usually, the PUT method is used to replace an existing attribute/value on the server. As the concrete parameters and/or methods available are service dependent, I can only guess here that the intent is to update an attribute named short_description to store the value Update me (URL-encoded -- or, more formally, x-www-form-urlencoded).
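As a quick check of that decoding, in an x-www-form-urlencoded body a `+` stands for a space (a minimal sketch in plain POSIX shell; note it handles only `+`, not `%XX` escapes):

```shell
# x-www-form-urlencoded uses '+' for spaces; decode just that part with sed.
body="short_description=Update+me"
decoded=$(printf '%s' "$body" | sed 's/+/ /g')
echo "$decoded"   # short_description=Update me
```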
Maybe you should first read a little bit more about those topics, and then, if necessary, post another question describing in more detail both the target server and the goal you're trying to achieve on it.

Download build drop from hosted Team Foundation Service

Using the hosted Team Foundation Service at tfs.visualstudio.com, one has the option in a Build Definition to "Copy build output to the server" which creates a zip of the drop folder that can be downloaded over https via team web access. I really need to download this drop automatically, so I can chain input to the next stage in my build pipeline.
Unfortunately, the drop URL is not obvious, but can be created using the TfsDropDownloader.
TL;DR - I can't get the TfsDropDownloader to work; I'm hoping someone else has used this tool or a similar method to successfully download a drop from https://tfs.visualstudio.com
Using the command line TfsDropDownloader.exe I can do this:
TfsDropDownloader.exe /c:"https://MYPROJECTNAME.visualstudio.com/DefaultCollection" /t:"ProjectName" /b:"BuildDefinitionName" /u:username /p:password
...and get an empty zip file with the correct build label name of the last successful build e.g. BuildDefinitionName_20130611.1.zip
Running the source code in the debugger, this is because the URL that is generated for downloading:
https://tflonline.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop/BuildDefinitionName_20130611.1.zip
...returns a content type of application/json, which is unsupported. This exception is swallowed by the application, but not before the empty zip file is created.
Is it possible the REST API on Team Foundation Service has changed in some way so the generated URL is no longer correct?
Note that I am using the "alternate credentials" defined on my Team Foundation Service account (i.e. not my live ID) - using anything else gets me TF30063: not authorized.
I got it working by using alternate credentials, but I also had to access the REST API via a different path.
The current TfsDropDownloader builds a URL that looks like this:
https://project.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop/BuildDefinitionName_20130611.1.zip
This returns empty JSON whenever I try to use it. I'm definitely authenticated, because if I tweak the URL to:
https://project.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop
I get a nice JSON listing of every single file in the drop, but no zip.
By spying on the SSL traffic to https://tfs.visualstudio.com with Fiddler, I saw that clicking the "Download drop as zip" link hits another endpoint at:
https://project.visualstudio.com/DefaultCollection/ProjectName/_api/_build/ItemContent?buildUri=vstfs%3a%2f%2f%2fBuild%2fBuild%2f639&path=%2Fdrop
...which does give you a zip. The "vstfs%3a%2f%2f%2fBuild%2fBuild%2f639" portion is the URL encoded BuildUri.
So I've changed my version of GetServerPath in the TfsDropDownloader source to do this:
private static string GetServerPath(TfsConnection collection, IBuildDetail buildDetail)
{
    var downloadPath = string.Format("{0}{1}/_api/_build/ItemContent?buildUri={2}&path=%2Fdrop",
        collection.Uri,
        HttpUtility.UrlPathEncode(buildDetail.TeamProject),
        HttpUtility.UrlEncode(buildDetail.Uri.ToString()));
    return downloadPath;
}
This works for me for the time being. Hopefully this helps someone else with the same problem!
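For reference, the same URL can be assembled in a shell script (a sketch; the collection URL, project name, and build number are placeholders taken from the example above, and only ':' and '/' are encoded since that is all the vstfs URI contains):

```shell
# Build the ItemContent download URL from the collection URI and build URI.
# The build URI must be URL-encoded (':' -> %3a, '/' -> %2f).
collection="https://project.visualstudio.com/DefaultCollection/"
project="ProjectName"
build_uri="vstfs:///Build/Build/639"
encoded=$(printf '%s' "$build_uri" | sed -e 's/:/%3a/g' -e 's,/,%2f,g')
url="${collection}${project}/_api/_build/ItemContent?buildUri=${encoded}&path=%2Fdrop"
echo "$url"
```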

Disabling the large file notification from google drive

While downloading a zip file (more than 25 MB, I assume), I am getting the notification below in the browser:
Sorry, we are unable to scan this file for viruses.
The file exceeds the maximum size that we scan. Download anyway
Is there any option to disable it, so that I can download any large file directly without such messages interrupting? I want to know whether there is any setting in Google Drive that disables that warning message.
After spending countless hours trying to get a direct download link that bypasses the virus scan, I finally figured it out by accident. A URL in the format below, along with your Google API key, will bypass the virus scan. I could not find this documented anywhere (here is the official doc), so use at your own risk as future updates might break it.
https://www.googleapis.com/drive/v3/files/fileid/?key=yourapikey&alt=media
You can also use the authorization access token from google oauth instead of the apikey.
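A sketch of that download as a script (the file ID and API key are placeholders; the v3 `alt=media` URL format is the one from the answer above, and the curl call itself is commented out since it needs real credentials):

```shell
# Build the Drive v3 direct-download URL that skips the virus-scan page.
FILE_ID="your-file-id"     # placeholder
API_KEY="your-api-key"     # placeholder
url="https://www.googleapis.com/drive/v3/files/${FILE_ID}?key=${API_KEY}&alt=media"
echo "$url"
# curl -L -o bigfile.zip "$url"   # actual download, needs a real ID and key
```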
Obsolete: this approach no longer works after August 31, 2016
I found that the following pattern allows to disable the large file notification:
https://googledrive.com/host/file-id
I assume everyone knows how to find the file-id for a Google Drive file.
Please keep in mind that this method works only if the file is shared with the "Public on the web" option.
But this feature is deprecated and will stop working after August 31, 2016: http://googleappsdeveloper.blogspot.com/2015/08/deprecating-web-hosting-support-in.html
I don't believe there is any longer a way to bypass Google Drive's warning that a file is too big to be scanned for viruses.
At one time Google Drive had web hosting support via webViewLink URLs that look like googledrive.com/host/[doc id]. But that feature is deprecated and will stop working after August 31, 2016.
http://googleappsdeveloper.blogspot.com/2015/08/deprecating-web-hosting-support-in.html
https://developers.google.com/drive/v2/web/publish-site
Use ZIP Extractor to download the file and extract it to your Google Drive. It's offered as an option at the website you found the link at. You have to be logged in at Google Drive for this to work.
A better alternative is to use a Google Cloud Bucket.
Open the Cloud Storage browser in the Google Cloud Platform Console.
Create a bucket that is Multi-Regional or Regional based on your need.
Upload the file to the bucket and make it public.
Use the public url to download the file using scripts, browser, gcloud commands, wget or curl.
This will work for any file size. Cloud Bucket service is really cheap. Google gives you free credits to start out and the bucket use can be well covered under the free credits.
WGET
export fileid=<file-id>
export filename=combian.rar
wget --save-cookies cookies.txt 'https://docs.google.com/uc?export=download&id='$fileid -O- \
| sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1/p' > confirm.txt
wget --load-cookies cookies.txt -O $filename \
'https://docs.google.com/uc?export=download&id='$fileid'&confirm='$(<confirm.txt)
You can bypass preview, virus scan, AND get the correct file name if you simply use the following link convention:
https://drive.google.com/uc?export=download&confirm=yTib&id=FILE_ID
where the google drive file id replaces FILE_ID.
Thanks,
Mick
Well, you can try "Download anyway".
If that doesn't work, I have found with Gmail that changing the extension to .txt allows the file to be downloaded/transferred; once downloaded, you can change it back to .zip.
The way to go is to use the link format Mick suggested:
https://drive.google.com/uc?export=download&confirm=CONFIRM_CODE&id=FILE_ID
The confirm code is valid indefinitely for each file; it does not change over time. So far the only way to find the confirm code (or, for that matter, the complete link) is to curl https://drive.google.com/uc?export=download&id=FILE_ID and then extract the link under the "Download anyway" button, which includes the confirm code.
Cheers.
You can disable the virus scan in Chrome, but it will disable virus scanning for all activity in the browser. In Chrome Preferences, show Advanced Settings and uncheck "Enable phishing and malware protection".

Google static map API getting 403 forbidden when loading from img tag

What I have is a Google map that shows the location of a property, but the dynamic maps don't print well, so I decided to implement the Google Static Maps image API.
http://lpoc.co.uk/properties-for-sale/property/oldgate-dairy-st-james-road-long-sutton-cambridgeshire-pe12/?prop-print=1
^^ is an example of a property in print view; it should show a static map image, but it fails to load, and looking at my inspector I'm getting a 403 Forbidden response for the image.
But if I go to the URL directly the image loads...
What am I doing wrong?
Thanks
Scott
This has gotten quite a lot of views, so I'm adding my solution to the problem here:
When using the new API, make sure you generate a Key for browser apps (with referers) and also make sure the patterns match your URL.
E.g. when requesting from example.com your pattern should be
example.com/*
When you're requesting from www.example.com:
*.example.com/*
So make sure you check whether a subdomain is present and allow both patterns in the developer console.
Visit the Developer Console.
Under API Keys, click the pencil icon to edit.
Under "Key restrictions", ensure that you have an entry for example.com/*, *.example.com/*, and any local testing domains you might want.
There seems to be some confusion here, and since this thread is highly ranked on Google, it seems relevant to clarify.
Google has a couple of different API's to use for their maps service:
Javascript API
The old version of this API was version 2, which required a key. This version is deprecated, and it is recommended to upgrade to the newer version 3. Note that the documentation still states that you need a key for this to function, except if you're using "Google Maps API for Business".
Static Maps API
This is a whole different story. Static Maps is a service that does not require any JavaScript. You simply call a URL, and Google will return a map image, making it possible to insert the URL directly into your <img> tag.
The newest version is version 2, and this requires a key to function because a usage limit is applied.
A key can be requested here:
https://code.google.com/apis/console
And the key should be added to the request for the correct image to be generated:
http://maps.googleapis.com/maps/api/staticmap?center=New+York,NY&zoom=13&size=600x300&key=API_console_key
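Since the whole request is just a URL, it can be built with simple string assembly and dropped into an <img> src. A sketch (the center, size, and key values are placeholders based on the example above):

```shell
# Assemble a Static Maps request URL and the <img> tag that would embed it.
base="https://maps.googleapis.com/maps/api/staticmap"
params="center=New+York,NY&zoom=13&size=600x300&key=YOUR_API_KEY"
tag="<img src=\"${base}?${params}\" alt=\"Static map\">"
echo "$tag"
```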
I hope this clears up some confusion.
I had this same problem but my solution was different. I had the V2 maps api enabled, but not the static maps api (I thought this was V2). I enabled the static maps api and it worked.
Oops I feel like such an idiot. I was using the old V2 maps API URL and not the new V3 API URL. I was getting a 403 because I was using the V2 URL without providing an API key :(
Be a hundred percent sure of these points (for static maps):
Enable your project at this url :
https://console.developers.google.com/apis/api/static_maps_backend/overview?project=
Your localhost, staging, and production URLs are all listed, with wildcards, in the referrer section.
Google has changed its policy, and you now need an API key to display maps. Refer to this for more: Google Maps API without key?
Hope it helps.
Staticmaps V3 doesn't need the "Key" attribute and removing it seems to solve the <img> source problem.
Try a URL like this:
http://maps.googleapis.com/maps/api/staticmap?center=0.0000,0.0000&zoom=13&size=200x200&maptype=roadmap&markers=0.0000,0.0000&sensor=false
For more information read this.
Yeah, Google Maps API version 3 is the JavaScript version; the latest "Google Static Maps" API is version 2.0. I suspect there might be some restriction on use.
I could also not display static maps and could see 403 error in the browser's network console.
HTTP response headers:
status: 403
x-content-type-options: nosniff
I had an API key with a lot of Google Maps APIs enabled but the Google Static Maps API was missing, enabling it solved the issue.
Now you should use the signature parameter, which you add to the request; otherwise static maps won't work.
Here are a few useful links:
1) how to generate a signature
2) how to build the signature on the back-end side (code snippet)
I am using Wordpress 4.9.4 with ChurchThemes Exodus Theme. I had applied for & generated a New API_KEY.
I confirmed it was being used when calling the map (screenshot: Google Map link).
However, the JS console showed a Google Maps error (screenshot).
As Johnny White mentioned above, I had to navigate to the API Library screen via the APIs & Services menu, which greets you with the API Library screen (screenshot).
Click on Maps (17), lower LHS.
Search for and click Google Static Maps API, and enable it if needed.
You may also need to enable the Google Maps JavaScript API (same process as for Static Maps).
Once that is done, your maps should start appearing on your site or app.
If they don't appear on refresh, you may need to:
- clear your cache (WordPress or Drupal websites),
- wait the 5 minutes recommended for the enabled APIs to register.
Try enabling billing on this Google Cloud Project/Firebase Project.
I was experiencing this same issue and just received the 403 error in the console.
Copying and pasting the Static Maps URL into the URL bar and loading it showed the following error message:
The Google Maps Platform server rejected your request. You must enable Billing on the Google Cloud Project at
https://console.cloud.google.com/project/_/billing/enable Learn more at https://developers.google.com/maps/gmp-get-started
Hope this helps!
