Dredd Apiary contract-driven test - Is there any way to access the private Apiary documentation blueprint from a local Dredd config? - apiblueprint

I'm running contract-driven development tests using Dredd.
I know how to configure Dredd tests to run against either a local or a remote server, given a blueprint .apib file. Typically, the relevant fields in my Dredd config file read like:
blueprint: myblueprintfile.apib
endpoint: localhost:3000 <or any remote server>
I haven't found a way to automatically refer to the remote blueprint hosted on Apiary, though. What I would like to achieve is something along the lines of:
blueprint: <remote apiary apib file>
endpoint: localhost:3000 <or any remote server>
I can basically achieve the same result by manually fetching the blueprint with the Apiary CLI and saving it to a local file before running the actual Dredd tests:
export APIARY_API_KEY=<key>
apiary fetch --api-name=<name>
Is there a way to achieve this step directly from the Dredd configuration file?
Note:
I'm working with an authenticated private Apiary account.
I'm not worried about the endpoint field above; my problem is having the blueprint field point to a remote Apiary source automatically.
This question may be a duplicate, but I've looked at related questions and didn't find anything.

While it's possible to point to a remotely stored .apib file, that will not work for private documentation. At this point you can either:
Use the GitHub Sync to get the document onto your machine
Automate fetching the document with the Apiary CLI before testing (see the sketch below)
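A minimal sketch of the second option, with placeholder names; check your apiary-client version for the exact fetch flags:
export APIARY_API_KEY=<key>
# fetch the private blueprint to a local file (the --output flag is assumed;
# if your CLI version prints to stdout instead, redirect it to the file)
apiary fetch --api-name=<name> --output=./myblueprintfile.apib
# run Dredd against the fetched blueprint and a local server
dredd ./myblueprintfile.apib http://localhost:3000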

Related

Running/Testing an AWS Serverless API written in Terraform

There is no clear path to doing development in a serverless environment.
I have an API Gateway backed by some Lambda functions declared in Terraform. I deploy to the cloud and everything is fine, but how do I go about setting up a proper workflow for development? It seems like a struggle to push every small code change to the cloud just to run your code while developing. Terraform has started getting some support from the SAM framework for running your Lambda functions locally (https://aws.amazon.com/blogs/compute/better-together-aws-sam-cli-and-hashicorp-terraform/), but there is still no way to simulate a local server and test out your endpoints in Postman, for example.
First of all, I use the Serverless Framework instead of Terraform, so my answer is based on what you provided and what I found around.
From what I understood so far from the provided documentation, you are able to run the SAM CLI with Terraform (cf. the chapter on local testing).
You might follow this documentation to invoke local functions.
I recommend using JSON files to create use cases instead of stdin injection.
The first step is to create your payload in a JSON file and invoke your Lambda with that payload, like:
sam local invoke "YOUR_LAMBDA_NAME" -e ./path/to/yourjsonfile.json
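For example, a hypothetical event file for an API Gateway-style invocation; the file name and field shape are illustrative and depend on how your Lambda is actually triggered:
mkdir -p ./events
cat > ./events/get-user.json <<'EOF'
{
  "httpMethod": "GET",
  "path": "/users/42",
  "headers": { "Accept": "application/json" }
}
EOF
# invoke the function locally with the JSON payload as the event
sam local invoke "YOUR_LAMBDA_NAME" -e ./events/get-user.json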

Fetching profile based plain text config in spring cloud config server

We have a config server running with config files stored in Git. We have a service named proxy-server, and its config differs between the live and qa profiles, so we are keeping a proxy-server.yml in each profile with different contents, as shown below.
My service is an nginx-based service, so I have to use plain-text config, for which I am calling:
curl -XGET http://config:8100/proxy-service/live/latest/proxy-server.yml
I expect the contents of the yml under the live profile to be served here, but I see the contents of the file under the qa profile. Why is this? How do I get profile-based plain-text config for a given service under a label?
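For reference, a sketch of how the call from the question maps onto Spring Cloud Config Server's documented plain-text resource pattern (host and names taken from the question):
# plain-text resources are served as /{application}/{profile}/{label}/{path},
# so here application=proxy-service, profile=live, label=latest,
# path=proxy-server.yml -- worth double-checking each segment matches what
# you intend (note the service is named proxy-server but the URL says
# proxy-service)
curl -XGET http://config:8100/proxy-service/live/latest/proxy-server.yml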

Apply configuration yaml file via API or SDK

I started a pod in a Kubernetes cluster which can call the Kubernetes API via the Go SDK (like in this example: https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration). I want to listen for some external events in this pod (e.g. GitHub webhooks), fetch yaml configuration files from a repository, and apply them to this cluster.
Is it possible to call kubectl apply -f <config-file> via the Kubernetes API (or better, via the Go SDK)?
As yaml directly: no, not that I'm aware of. But if you increase the kubectl verbosity (--v=100 or such), you'll see that the first thing kubectl does to your yaml file is convert it to json, and then POST that content to the API. So the spirit of the answer to your question is "yes."
This box/kube-applier project may interest you. While it does not appear to be webhook-aware, I am sure they would welcome a PR teaching it to do that. Using their existing project also means you benefit from all the bugs they have already squashed, as well as their nifty Prometheus metrics integration.
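A minimal sketch of that yaml-to-json-then-POST flow from inside the pod, assuming an in-cluster service-account token and a namespaced Deployment manifest; the yq conversion step and the API path are illustrative:
APISERVER=https://kubernetes.default.svc
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# convert the fetched manifest from yaml to json (yq is one option for this)
yq -o=json '.' deployment.yaml > deployment.json
# POST it to the matching resource collection, as kubectl does under the hood
curl -sS --cacert "$CACERT" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST "$APISERVER/apis/apps/v1/namespaces/default/deployments" \
  --data @deployment.json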

How do I deploy code directly from GitHub to Parse?

Right now I am deploying code to the Parse cloud via the command-line tool. Is it possible for Parse to automatically deploy changes on my GitHub master branch? If so, how can I do this?
I don't believe this is possible with Parse alone.
However, you could use GitHub webhooks and the Parse CLI to configure your own setup with a server (used to deploy). The overview is:
1) Set up the Parse CLI on your server so that you can run commands like parse deploy
2) Set up your server to listen for GitHub webhooks, e.g. have http://yoursite.com/githubWebhook listen for POST requests.
3) When your endpoint receives the POST from GitHub (confirmed via the secret included in the GitHub POST payload), run a script that executes the parse deploy command on your server (see the sketch below).
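A minimal sketch of such a script, assuming the Parse CLI is installed and using an illustrative path for the cloud-code checkout:
#!/bin/bash
# deploy.sh -- run by the webhook handler after the GitHub secret is verified
cd /srv/my-parse-app || exit 1
git pull origin master   # sync the commit that triggered the webhook
parse deploy             # push the updated cloud code to Parse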
Here's an example project in Node.js that shows how to set up the handler for GitHub webhooks. And here's an SO post describing how to execute commands in Node.
Let me know if you need any more clarification.

Deploying to Parse.com hosting from continuous integration

Does anyone know if it's possible to deploy to Parse.com hosting from CloudBees, Travis, or Circle?
I'm aware of the command-line tool, but I'm not sure how to integrate it with CI or if there is any other way.
I've found a solution that has worked well for me. Using travis-ci.com, you can set it up to work with Parse.com and GitHub: users commit to the master branch and the code is auto-deployed to Parse.com. Basically, your credentials are encrypted using Travis's Ruby script (see http://docs.travis-ci.com/user/encryption-keys/). Once your keys are made, you set up a .yml config file that, on Travis, downloads the Parse SDK in a virtual environment, uses the encrypted credentials to log in to Parse, and then runs the parse deploy command, resulting in a push to Parse.
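A minimal .travis.yml sketch of that setup; the installer URL, encrypted value, and login mechanism are illustrative and depend on your Parse CLI version:
language: node_js
node_js:
  - "0.10"
env:
  global:
    # output of `travis encrypt` for your Parse credentials (placeholder)
    - secure: "ENCRYPTED_CREDENTIALS_FROM_travis_encrypt"
install:
  # download the Parse CLI into the build VM (URL is illustrative)
  - curl -s https://www.parse.com/downloads/cloud_code/installer.sh | sudo /bin/bash
script:
  # authenticate with the decrypted credentials and push the cloud code
  - parse deploy
branches:
  only:
    - master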
