Creating Reachability Analyzer paths using Terraform

As part of my AWS deployment, I would like to deploy a set of paths into the Reachability Analyzer that I can use to diagnose connectivity problems in my application.
Does anyone know if this is possible using Terraform? I cannot find any examples, so I am presuming not at this point.

It's not yet supported, but there is already a GitHub issue for it on the AWS provider:
VPC Reachability Analyzer / EC2 Network Insights
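In the meantime, if you just need the paths to exist, one workaround is to create them outside Terraform by calling the EC2 API directly (Reachability Analyzer paths correspond to the CreateNetworkInsightsPath operation). A minimal sketch using the AWS SDK for Go; the region, instance IDs, and port are placeholders, and this assumes an SDK version that already includes the Network Insights operations:

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/ec2"
    )

    func main() {
        sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("eu-west-1")}))
        svc := ec2.New(sess)

        // Source and Destination take supported resource IDs/ARNs
        // (instances, ENIs, internet gateways, ...); these IDs are made up.
        out, err := svc.CreateNetworkInsightsPath(&ec2.CreateNetworkInsightsPathInput{
            Source:          aws.String("i-0123456789abcdef0"),
            Destination:     aws.String("i-0fedcba9876543210"),
            Protocol:        aws.String("tcp"),
            DestinationPort: aws.Int64(443),
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("created path:", aws.StringValue(out.NetworkInsightsPath.NetworkInsightsPathId))
    }

Anything created this way is invisible to Terraform state, so treat it as a stopgap until the provider resource is available.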

Related

Amazon’s AWS ability to apply a firewall rule based on source IP and ports

Microsoft Azure has security groups, which allow filtering on source IP and ports. Does Amazon’s AWS have the same feature, and can it be programmed from the command line?
Not only does it have it, it is even called the same thing. It can be accessed by downloading the AWS CLI. Most of the commands relating to security groups live under aws ec2, for example aws ec2 describe-security-groups.
Some of the commands pertaining to security groups can be fairly confusing, so you might want to look at the GUI before your first time, and reading the docs will be helpful (as it usually is).
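If you would rather do it programmatically than from the shell, the same describe call is available in the SDKs. A small sketch with the AWS SDK for Go that lists security groups allowing ingress on a given port from a given CIDR; the region, port, and CIDR below are placeholders:

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/ec2"
    )

    func main() {
        sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
        svc := ec2.New(sess)

        // Use the documented ip-permission.* filters to match on ingress rules.
        out, err := svc.DescribeSecurityGroups(&ec2.DescribeSecurityGroupsInput{
            Filters: []*ec2.Filter{
                {Name: aws.String("ip-permission.from-port"), Values: []*string{aws.String("22")}},
                {Name: aws.String("ip-permission.cidr"), Values: []*string{aws.String("203.0.113.0/24")}},
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        for _, sg := range out.SecurityGroups {
            fmt.Println(aws.StringValue(sg.GroupId), aws.StringValue(sg.GroupName))
        }
    }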

Deploying services in GKE k8s cluster using Golang k8s client

I'm able to create a GKE cluster using the golang container lib here.
Now, for my golang k8s client to be able to deploy my k8s deployment files there, I need to get the kubeconfig from the GKE cluster. However, I can't find the relevant API for that in the container lib above. Can anyone please point out what I am missing?
As per Subhash's suggestion, I am posting the answer from this question:
The GKE API does not have a call that outputs a kubeconfig file (or fragment). The specific processing that goes from fetching a full cluster definition to updating the kubeconfig file is implemented in Python in the gcloud tooling. It isn't part of the Go SDK, so you'd need to implement it yourself.
You can also try using kubectl config set-credentials (see this) and/or see if you can vendor the libraries that implement that function if you want to do it programmatically.
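For reference, "implement it yourself" can be quite short if you skip the kubeconfig file entirely: fetch the cluster's endpoint and CA certificate from the GKE API, take a bearer token from Application Default Credentials, and build a client-go rest.Config directly. A rough sketch under those assumptions; the project/location/cluster name is a placeholder, and the import paths and call signatures assume reasonably recent versions of the libraries:

    package main

    import (
        "context"
        "encoding/base64"
        "fmt"
        "log"

        container "cloud.google.com/go/container/apiv1"
        "golang.org/x/oauth2/google"
        // In newer module versions this package lives at
        // cloud.google.com/go/container/apiv1/containerpb instead.
        containerpb "google.golang.org/genproto/googleapis/container/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        ctx := context.Background()

        // Fetch the cluster definition (endpoint + CA cert) from the GKE API.
        cm, err := container.NewClusterManagerClient(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer cm.Close()

        cluster, err := cm.GetCluster(ctx, &containerpb.GetClusterRequest{
            // Hypothetical identifiers; replace with your project/location/cluster.
            Name: "projects/my-project/locations/us-central1-a/clusters/my-cluster",
        })
        if err != nil {
            log.Fatal(err)
        }

        caCert, err := base64.StdEncoding.DecodeString(cluster.MasterAuth.ClusterCaCertificate)
        if err != nil {
            log.Fatal(err)
        }

        // Use Application Default Credentials for a bearer token instead of a kubeconfig.
        ts, err := google.DefaultTokenSource(ctx, "https://www.googleapis.com/auth/cloud-platform")
        if err != nil {
            log.Fatal(err)
        }
        tok, err := ts.Token()
        if err != nil {
            log.Fatal(err)
        }

        cfg := &rest.Config{
            Host:            "https://" + cluster.Endpoint,
            BearerToken:     tok.AccessToken,
            TLSClientConfig: rest.TLSClientConfig{CAData: caCert},
        }

        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Smoke test: list namespaces through the generated client.
        nss, err := clientset.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("namespaces:", len(nss.Items))
    }

Note that the token from Application Default Credentials expires after a while, so a long-running service should refresh it via the TokenSource rather than setting BearerToken once.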

Elasticsearch as a service for GCP

As far as I'm aware, there are no managed Elasticsearch solutions provided by Google Cloud Platform, the way there is Amazon Elasticsearch Service on AWS.
I've opened a feature request ticket for this on the issue-tracker here, but I was wondering if there is a service somewhere on GCP that I'm missing? If not, are there plans to build an ES service on top of GCP? And if so, is there a general timeline on when that will be GA?
When configuring your cluster on ES Cloud (the cloud operated by Elastic Inc), you have the choice between hosting it on AWS or on GCP. If you pick GCP, the cluster is fully managed by Elastic on GCP.
This is a commercial offering (but so is AWS Elasticsearch), and there is a 14-day free trial to see what it looks like.
Also worth reading:
https://www.elastic.co/blog/hosted-elasticsearch-services-roundup-elastic-cloud-and-amazon-elasticsearch-service
https://www.elastic.co/aws-elasticsearch-service
Thank you for creating a feature request!
Regarding Elasticsearch on GCP, I am not 100% sure whether it applies to your case, but there is a solution on Google Marketplace: Elasticsearch Service on Elastic Cloud, offered on GCP. Check it out and see if you can use it.

Cannot get Fabric8 to fully launch in AWS using stackpoint

I have been trying to spin up a Kubernetes/Fabric8 installation on AWS using Stackpoint as described in this video: https://www.youtube.com/watch?v=lNRpGJTSMKA
My problem is that three of the apps won't start because no volumes are available, and I cannot see how to resolve those PV requests. For example, Gogs is reporting the following error:
Unable to mount volumes for pod "gogs-2568819805-bcw8e_default(03d618b9-7477-11e6-8c6b-0a945216fb91)": timeout expired waiting for volumes to attach/mount for pod "gogs-2568819805-bcw8e"/"default". list of unattached/unmounted volumes=[gogs-data]
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "gogs-2568819805-bcw8e"/"default". list of unattached/unmounted volumes=[gogs-data]
I am pretty sure this is very simple, but I cannot see how to connect the dots here from the various K8s and Fabric8 docs. I can create a new EBS volume in AWS easily enough, but cannot see how to then update this running stack to attach it to these services. Any help would be greatly appreciated!
Sorry about that; what version of gofabric8 are you using? We're currently adding persistent volume support for the core platform apps, although the integration with StackPoint isn't quite there yet. Hopefully soon though.
For now you should be able to disable the PV claims by passing --pv=false during the deploy, i.e. gofabric8 deploy --pv=false. We'll look at using this as the default until the integration is there and we can leverage AWS persistent volumes.
We just shipped functionality that allows you to create and manage AWS volumes for Kubernetes. You get a volume, PV, and claim - just name the claim to be what is required by Fabric8. Eventually, you'll be able to use dynamic volume creation.
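If you end up wiring this together yourself instead, the objects involved are just a PersistentVolume that points at a pre-created EBS volume plus a claim whose name matches what the app expects (gogs-data in the error above). A rough client-go sketch under those assumptions; the EBS volume ID and size are placeholders, and some field types (e.g. the PVC Resources field) vary slightly between client-go releases:

    package main

    import (
        "context"
        "log"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load credentials from the local kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()

        // PersistentVolume backed by an EBS volume created beforehand (ID is made up).
        pv := &corev1.PersistentVolume{
            ObjectMeta: metav1.ObjectMeta{Name: "gogs-data-pv"},
            Spec: corev1.PersistentVolumeSpec{
                Capacity:    corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("5Gi")},
                AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                PersistentVolumeSource: corev1.PersistentVolumeSource{
                    AWSElasticBlockStore: &corev1.AWSElasticBlockStoreVolumeSource{
                        VolumeID: "vol-0123456789abcdef0",
                        FSType:   "ext4",
                    },
                },
            },
        }
        if _, err := cs.CoreV1().PersistentVolumes().Create(ctx, pv, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }

        // Claim named to match the volume the gogs pod mounts.
        pvc := &corev1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{Name: "gogs-data", Namespace: "default"},
            Spec: corev1.PersistentVolumeClaimSpec{
                AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                VolumeName:  "gogs-data-pv",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("5Gi")},
                },
            },
        }
        if _, err := cs.CoreV1().PersistentVolumeClaims("default").Create(ctx, pvc, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
    }

One thing to watch: the EBS volume has to live in the same availability zone as the node that ends up mounting it.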

Getting started with Fabric8, AWS using stackpoint

I have historically used a lot of manual chaining to get a CI pipeline in place for microservice development, so I am excited to try Fabric8, as it seems it will make life a lot easier. I'm running into some early issues though.
I did manage to get Fabric8 running locally, but I want to get things running on AWS so I can present a more real-world flow to stakeholders. Following the notes on this page, Fabric8 on AWS, I was able to get a 3-server cluster running using Stackpoint. But I cannot connect to that cluster to start administering the services. The page references this link (http://fabric8.default.replace.me.io) but it is not working for me. I tried hitting each of the AWS instances by public IP, but that failed too. What would be my next steps here?
Yeah, the getting-started guides don't really explain this in great detail. There's a similar issue on the fabric8 issue tracker where we've tried to explain how to access the console.
TL;DR: using the AWS load balancer can add expense, so we deploy an NGINX reverse proxy and you set up a wildcard DNS entry pointing at it. We use and recommend Cloudflare for that, as it's free for this type of use and fast to set up.
We also created a blog post to explain the different options for how to access apps on Kubernetes.
Hope that helps!
