Unable to connect to AWS Docdb - errbit

We are trying to connect to AWS DocDB from Errbit, but with no luck. This is the connection string from DocDB:
mongodb://user:<insertYourPassword>@dev-docdb-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0
We are able to connect to Atlas DB though; the connection string format we are using for Atlas is something like this:
mongodb://user:pass@cluster-shard-00-00-xxx.mongodb.net:27017,cluster-shard-00-01-xxx.mongodb.net:27017,cluster-shard-00-02-xxx.mongodb.net:27017/errbit?ssl=true&replicaSet=Cluster-shard-0&authSource=admin&w=majority

You will need to change the config/mongo.rb file to be as follows:
log_level = Logger.const_get Errbit::Config.log_level.upcase
Mongoid.logger.level = log_level
Mongo::Logger.level = log_level
Mongoid.configure do |config|
  uri = if Errbit::Config.mongo_url == 'mongodb://localhost'
          "mongodb://localhost/errbit_#{Rails.env}"
        else
          Errbit::Config.mongo_url
        end

  config.load_configuration(
    clients: {
      default: {
        uri: uri,
        options: { ssl_ca_cert: Rails.root.join('rds-combined-ca-bundle.pem') }
      }
    },
    options: {
      use_activesupport_time_zone: true
    }
  )
end
You will notice that this is exactly the same as the current one, except that I added:
options: { ssl_ca_cert: Rails.root.join('rds-combined-ca-bundle.pem') }
It worked for me after doing this :) Of course, you need the rds-combined-ca-bundle.pem file to be present in your Rails root folder:
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
Yes, I had to create a Docker image with the following Dockerfile:
FROM errbit/errbit:latest
LABEL maintainer="Tarek N. Elsamni <tarek.samni+stackoverflow@gmail.com>"
WORKDIR /app
RUN wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
COPY ["mongo.rb", "/app/config/"]

Related

How do I connect redis and redis-insight containers on my network

I've written a .tf file that spins up a redis and a redis-insight container in their own private Docker network (on an OpenStack instance), but when I tunnel to redis-insight with ngrok I get an error:
[screenshot: Redis-insight error in browser]
I can't seem to get the environment variables on the redis-insight resource right.
I've tried many combinations of the env vars in the redis-insight resource.
Since I'm using ngrok for tunneling, I set the RITRUSTEDORIGINS var to its port (http://localhost:4040), following the example of this page in the redis documentation that uses nginx as a proxy, but with no luck.
What environment variables should I be using on my redis-insight resource?
This is what I have written so far:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.23.1"
    }
  }
}

provider "docker" {}

resource "docker_network" "redis_network" {
  name = "redis_network"
}

resource "docker_image" "redis" {
  name         = "redis:latest"
  keep_locally = false
}

resource "docker_container" "redis" {
  image = docker_image.redis.image_id
  name  = "redis"
  ports {
    internal = 6379
    external = 6379
  }
  network_mode = docker_network.redis_network.name
}

resource "docker_image" "redis-insight" {
  name         = "redislabs/redisinsight:latest"
  keep_locally = false
}

resource "docker_container" "redis-insight" {
  image = docker_image.redis-insight.image_id
  name  = "redis-insight"
  ports {
    internal = 8001
    external = 8001
  }
  network_mode = docker_network.redis_network.name
  depends_on   = [docker_container.redis]
  env = [
    "REDIS_URL=redis://redis:6379",
    "REDIS_PASSWORD=password",
    # "REDIS_DATABASE=1",
    # "REDIS_TLS=true",
    # "INSIGHT_DEBUG=true",
    # "RIPORT=8001",
    # "RIPROXYENABLE=t",
    "RITRUSTEDORIGINS=http://localhost:4040"
  ]
}
What's the hostname and port of RedisInsight you are accessing from your browser? If it's not localhost:4040, set that in RITRUSTEDORIGINS.
If it is localhost:4040, set RITRUSTEDORIGINS to http://localhost:4040.
Set the right protocol (http or https), hostname and port. This should match the one you use in browser.
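In Terraform terms, that means the RITRUSTEDORIGINS entry must match the exact origin shown in the browser's address bar; a minimal sketch, assuming RedisInsight is reached through an ngrok HTTPS tunnel (the subdomain is a placeholder):
env = [
  "REDIS_URL=redis://redis:6379",
  # must match scheme, host, and port exactly as used in the browser
  "RITRUSTEDORIGINS=https://<your-subdomain>.ngrok.io"
]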

Terraform template_cloudinit_config multiple part execution order is wrong

I am using Terraform to build my EC2 instances, and as part of the instance bootstrap I added a cloud-init config to run multiple user-data scripts. But the content_type = "text/x-shellscript" part is always executed first. I verified this in /var/log/cloud-init-output.log; it shows the shell script is invoked first. How do I configure the shell script to run last?
data "template_cloudinit_config" "myapp_cloudinit_config" {
gzip = false
base64_encode = false
# Main cloud-config configuration file.
part {
content_type = "text/cloud-config"
content = "${data.template_file.base_bootstrap_file.rendered}"
merge_type = "list(append)+dict(recurse_array)+str()"
}
part {
content_type = "text/cloud-config"
content = "${module.template_file_appsec_init.appsec_user_data_rendered}"
merge_type = "list(append)+dict(recurse_array)+str()"
}
part {
content_type = "text/x-shellscript"
content = "${module.template_file_beat_init.beat_user_data_rendered}"
}
}
The shell script looks like this:
module " template_file_beat_init" {
source = "url" #the source url contains the zip file which includes the below shell script
}
#!/bin/sh
deploy_the_app() {
//invoke ansible playbook execution
}
deploy_the_app
Cloud provider: AWS
OS: RHEL 8.3
cloud-init --version: /usr/bin/cloud-init 19.4
Terraform v0.11.8
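If the goal is to have the beat script run after the cloud-config parts are applied, one option worth experimenting with (a sketch, not from the original thread) is to ship it as a cloud-config runcmd entry instead, since runcmd commands are executed during cloud-init's final stage; the /opt/beat_init.sh path is hypothetical and assumes an earlier part already placed the script on disk (e.g. via write_files):
part {
  content_type = "text/cloud-config"
  content      = <<EOF
#cloud-config
runcmd:
  # hypothetical path; assumes an earlier part wrote the script to disk
  - [ sh, "/opt/beat_init.sh" ]
EOF
  merge_type = "list(append)+dict(recurse_array)+str()"
}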

How can I run a shell script on multiple VMware VMs created by a Terraform module?

I am using this module to spin up multiple VMs on my VMware cluster, https://registry.terraform.io/modules/Terraform-VMWare-Modules/vm/vsphere/1.6.0, and I want to run a shell script on all of the VMs afterwards using a null resource. With what I currently have, it complains that the host was not given a string, which makes sense. Here is my null resource:
# main.tf
module "jenkins-linuxvm-centos7" {
  source = "Terraform-VMWare-Modules/vm/vsphere"
  ...
}

resource "null_resource" "vm" {
  triggers = {
    vm_ips = join(",", module.jenkins-linuxvm-centos7.Linux-ip)
  }

  # export TF_VAR_root_password=<pass>
  connection {
    type     = "ssh"
    host     = module.jenkins-linuxvm-centos7.Linux-ip
    user     = "root"
    password = var.vm_root_password
    port     = "22"
    agent    = false
  }

  provisioner "file" {
    source      = "resize_disk.sh"
    destination = "/tmp/resize_disk.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/resize_disk.sh",
      "/tmp/resize_disk.sh"
    ]
  }
}
Do I need to use a dynamic block somehow? Or how can I modify host = module.jenkins-linuxvm-centos7.Linux-ip to include all the hosts I want to run it on?
You have to run it in a for_each loop. Below is example code where I am looping over the sql_var map variable; you will have to do it against the output of IPs, module.jenkins-linuxvm-centos7.Linux-ip, and you will be able to reference the IP of each machine as each.value. I don't know what your output looks like, so I'm guessing. If you are new to loops, here is one nice tutorial:
https://blog.boltops.com/2020/10/04/terraform-hcl-loops-with-count-and-for-each
resource "null_resource" "instance" {
for_each = var.sql_var
provisioner "local-exec" {
command = "echo ${each.key} >> hello.txt"
}
}
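Adapted to the module output in the question, a sketch might look like this, assuming module.jenkins-linuxvm-centos7.Linux-ip is a list of IP strings (if it is a single string, wrap it in a list first):
resource "null_resource" "vm" {
  # one null_resource instance per VM IP; toset() converts the list for for_each
  for_each = toset(module.jenkins-linuxvm-centos7.Linux-ip)

  connection {
    type     = "ssh"
    host     = each.value # the IP of this particular VM
    user     = "root"
    password = var.vm_root_password
    port     = "22"
    agent    = false
  }

  provisioner "file" {
    source      = "resize_disk.sh"
    destination = "/tmp/resize_disk.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/resize_disk.sh",
      "/tmp/resize_disk.sh"
    ]
  }
}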

Terraform stuck on `Refreshing state...` when running against `localstack`

I am using Terraform to publish a lambda to AWS. It works fine when I deploy to AWS but gets stuck on "Refreshing state..." when running against localstack.
Below is my .tf config file; as you can see, I configured the lambda endpoint to be http://localhost:4567.
provider "aws" {
profile = "default"
region = "ap-southeast-2"
endpoints {
lambda = "http://localhost:4567"
}
}
variable "runtime" {
default = "python3.6"
}
data "archive_file" "zipit" {
type = "zip"
source_dir = "crawler/dist"
output_path = "crawler/dist/deploy.zip"
}
resource "aws_lambda_function" "test_lambda" {
filename = "crawler/dist/deploy.zip"
function_name = "quote-crawler"
role = "arn:aws:iam::773592622512:role/LambdaRole"
handler = "handler.handler"
source_code_hash = "${data.archive_file.zipit.output_base64sha256}"
runtime = "${var.runtime}"
}
Below is the docker-compose file for localstack:
version: '2.1'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4583:4567-4583"
      - '8055:8080'
    environment:
      - SERVICES=${SERVICES-lambda }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-docker-reuse }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
Does anyone know how to fix the issue?
This is how I fixed a similar issue:
Set export TF_LOG=TRACE, which is the most verbose logging.
Run terraform plan ....
In the log, I got the root cause of the issue, and it was:
dag/walk: vertex "module.kubernetes_apps.provider.helmfile (close)" is waiting for "module.kubernetes_apps.helmfile_release_set.metrics_server"
From the logs, I identified the state that was the cause of the issue: module.kubernetes_apps.helmfile_release_set.metrics_server.
I deleted its state:
terraform state rm module.kubernetes_apps.helmfile_release_set.metrics_server
Now running terraform plan again should fix the issue.
This is not the best solution; that's why I contacted the owner of this provider to fix the issue without this workaround.
The reason I failed is that Terraform tries to check credentials against AWS. Adding the two lines below to your .tf configuration file solves the issue:
skip_credentials_validation = true
skip_metadata_api_check = true
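Putting those together with the original configuration, the provider block would look something like this:
provider "aws" {
  profile                     = "default"
  region                      = "ap-southeast-2"
  skip_credentials_validation = true
  skip_metadata_api_check     = true

  endpoints {
    lambda = "http://localhost:4567"
  }
}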
I ran into the same issue and fixed it by logging into the AWS dev profile from the console.
So don't forget to log in:
provider "aws" {
region = "ap-southeast-2"
profile = "dev"
}

How should I run a mysql command with nomad and arguments?

I am interested in using Nomad to initialize a MySQL database on a Windows server that is without Docker. I tried to create a job using the "exec" driver, the "mysql" command, and an args argument with the host, username, password, etc.
job "gavin-setup" {
datacenters = ["dc1"]
group "gavin-setup-group" {
task "setup" {
driver = "exec"
config = {
command = "mysql"
args = [
"-hlocalhost",
"-ugavin",
"-pgavin_secret",
"gavin_database",
"<",
"C:\\gavin\\config\\create.sql"
]
}
}
}
}
The arguments aren't being passed along. I also tried removing the args and just using the full command, which did not work:
mysql -hlocalhost -ugavin -pgavin_secret gavin_database < C:\\gavin\\config\\create.sql
Is it possible to do this kind of database initialization setup on a first-time run of the application? Should Nomad be capable of doing this? If not, should I be using some other process to do this kind of setup?
I ended up using the full path to mysql.exe and discovered that nomad logs -stderr <alloc-id> was really helpful.
job "gavin-setup" {
datacenters = ["dc1"]
group "gavin-setup-group" {
task "create_db" {
driver = "exec"
config = {
command = "C:/Program Files (x86)/MySQL/MySQL Server 5.6/bin/mysql.exe"
args = [
"-uroot",
"-pgavin_secret",
"-e",
"create database if not exists gavin_db;"
]
}
}
}
}
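Since the exec driver launches the binary directly rather than through a shell, redirection operators like < are passed to mysql as literal arguments, which is why the original args never worked. If the SQL has to stay in a file, a hedged alternative (reusing the question's paths) is mysql's built-in source statement:
config = {
  command = "C:/Program Files (x86)/MySQL/MySQL Server 5.6/bin/mysql.exe"
  args = [
    "-uroot",
    "-pgavin_secret",
    # "source" makes mysql read the file itself, replacing the shell's < redirection
    "-e",
    "source C:/gavin/config/create.sql"
  ]
}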
