Is there a simple way of setting up rethinkdb mirrors?

For example, this will start two servers in a cluster:
$ rethinkdb -n A --directory DATA_A
$ rethinkdb -n B --directory DATA_B --port-offset 1 --join localhost:29015
However, they aren't mirrors/replicas, as DATA_A and DATA_B aren't equivalent.
So, essentially I'm wondering if there's a way to start a cluster or server that would mirror databases and/or tables, whereby a new mirror could be added at any time, catch up with an existing database or table, and then continue syncing in real time.
Then, at any time any mirror could be dumped and the archives would be equivalent.
Any info would be much appreciated, thanks!!

Looks like this can be done for each table by setting the replicas key on reconfigure() or tableCreate().
e.g.
$ rethinkdb -n A --directory DATA_A --bind all
$ rethinkdb -n B --directory DATA_B --port-offset 1 --join localhost:29015
$ rethinkdb -n C --directory DATA_C --port-offset 2 --join localhost:29015
(JS) After connecting and creating a database:
var db = "";    // database name (left blank in the original)
var table = ""; // table name

r.db(db).tableCreate(table, {
    replicas: 3
}).run(connection, (err, res) => {
    if (err) throw err;
});
Afterwards, all documents inserted into the table should be present in all three data directories.
$ rethinkdb-dump -c localhost:28015 -f A.tar.gz
$ rethinkdb-dump -c localhost:28016 -f B.tar.gz
$ rethinkdb-dump -c localhost:28017 -f C.tar.gz
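To check that two dumps really contain the same documents, one rough approach is to unpack both archives and diff their normalized contents. This is only a sketch: it assumes each archive unpacks to a single top-level dump directory containing <db>/<table>.json files (document order may differ between servers, so keys and lines are sorted before comparing), and it requires jq.

$ mkdir -p A B
$ tar -xzf A.tar.gz --strip-components=1 -C A
$ tar -xzf B.tar.gz --strip-components=1 -C B
# Normalize: one compact, key-sorted JSON document per line, then sort lines
$ diff <(jq -S -c '.[]' A/*/*.json | sort) \
       <(jq -S -c '.[]' B/*/*.json | sort) \
    && echo "dumps are equivalent"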

Related

How to make kubectl commands work inside my AWS Lambda bootstrap code?

I would like to invoke an AWS Lambda function from a Java project. First things first: the Java project sends a payload to Lambda, then Lambda processes this payload and executes some kubectl commands. Right now I am using lambda-layer-kubectl in order to use kubectl inside the Lambda function.
Java project code is below:
// snippet-start:[lambda.java2.invoke.main]
public static void invokeFunction(LambdaClient awsLambda, String functionName) {
    InvokeResponse res = null;
    try {
        // Need an SdkBytes instance for the payload.
        JSONObject jsonObj = new JSONObject();
        jsonObj.put("number", 80);
        String json = jsonObj.toString();
        SdkBytes payload = SdkBytes.fromUtf8String(json);

        // Set up an InvokeRequest.
        InvokeRequest request = InvokeRequest.builder()
                .functionName(functionName)
                .payload(payload)
                .build();

        res = awsLambda.invoke(request);
        String value = res.payload().asUtf8String();
        System.out.println(value);
    } catch (LambdaException e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
}
I am using Tutorial – Publishing a custom runtime to build my Lambda function.
My bootstrap code is below:
#!/bin/sh
set -euo pipefail

export HOME="/tmp"
export PATH=$PATH:/opt/awscli:/opt/kubectl:/opt/helm:/opt/jq

mkdir -p /tmp/.kube
cp kubeConfig /tmp/.kube/config

# Handler format: <script_name>.<bash_function_name>
# The script file <script_name>.sh must be located at the root of your
# function's deployment package, alongside this bootstrap executable.

# Initialization - load function handler
source $LAMBDA_TASK_ROOT/"$(echo $_HANDLER | cut -d. -f1).sh"

# Processing
while true
do
    HEADERS="$(mktemp)"
    # Get an event. The HTTP request will block until one is received
    EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
    # Extract request ID by scraping response headers received above
    REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
    # Run the handler function from the script
    RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA" | jq ".number")
    if [[ $RESPONSE == 80 ]]
    then
        TEST=$(echo "1111")
        cp 80.yaml /tmp/80.yaml
        kubectl apply -f test-80.yaml
    fi
    curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$TEST"
done
I got "{"errorMessage":"2022-11-20T20:35:41.005Z e0d3dedb-3b82-4007-9bf6-5649eddda916 Task timed out after 3.01 seconds"}
Process finished with exit code 0" after running the java project."
Tried extend lambda running time to 10s, still time out.
However, if I put "cp 80.yaml /tmp/80.yaml" and "kubectl apply -f test-80.yaml" outside the while loop, right after "cp kubeConfig /tmp/.kube/config", the Kubernetes job is created successfully.
I expect the kubectl commands to execute successfully and a new kube job to be created.
Could somebody help me with it? Thank you very much in advance.

How much can I chain bash commands?

Very quick question. I have put together a few aliases to make these tedious Bluetooth gymnastics that Apple has forced down our throats over the past few years a lot easier. My question: in the following bash aliases, may I chain multiple "&&"s or "&"s together so that I don't have to make multiple aliases in my .zshrc file? For instance, this is what I currently have:
alias sonosconnect="bluetoothconnector --connect 54-2a-1b-bf-2c-dc && switchaudiosource -s SonosRoam"
alias sonos="blueutil -p 1 && sonosconnect"
alias bton="blueutil -p 1"
My end result would essentially be combining these 3 aliases into one long alias; is this possible by using more than one instance of an &&?
You shouldn't be using aliases at all. Use functions instead.
sonosconnect () {
    bluetoothconnector --connect 54-2a-1b-bf-2c-dc && switchaudiosource -s SonosRoam
}

sonos () {
    blueutil -p 1 && sonosconnect
}

bton () {
    blueutil -p 1
}

some_name_for_all () {
    sonosconnect
    sonos
    bton
}
There's no significant limit on how long the command line can be, but there are very few situations where an alias is a better choice than a function, and many cases where a function is the better or even the only appropriate choice.
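If the goal is one command that does everything, a single function can chain all three steps with && (a sketch reusing the commands from the question; the name sonosup is made up):

sonosup () {
    blueutil -p 1 \
        && bluetoothconnector --connect 54-2a-1b-bf-2c-dc \
        && switchaudiosource -s SonosRoam
}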

The same result for different parameters

I have a strange situation: for different parameters I always get the same result.
function test
{
    while getopts 'c:S:T:' opt ; do
        case "$opt" in
            c) STATEMENT=$OPTARG;;
            S) SCHEMA=$OPTARG;;
            T) TABLE=$OPTARG;;
        esac
    done
    echo "$STATEMENT, $SCHEMA, $TABLE"
}
test -c CREATE -S schema1 -T tabela1
test -c TRUNCATE -S schema2 -T tabela2
test -c DROP -S schema3 -T tabela3
Result:
CREATE, schema1, tabela1
CREATE, schema1, tabela1
CREATE, schema1, tabela1
What is wrong in my script?
In bash, you need to localize the $OPTIND variable.
function test () {
    local OPTIND
Otherwise it's global and the next call to getopts returns false (i.e. all arguments processed). Consider localizing the other variables, too, if they're not used outside of the function.
You can also just set it to zero.
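Putting it together, a fixed version of the function might look like this (renamed to parse_args purely for illustration, which also avoids shadowing the test builtin; the other variables are localized as suggested):

parse_args () {
    local OPTIND opt STATEMENT SCHEMA TABLE   # reset getopts state on every call
    while getopts 'c:S:T:' opt ; do
        case "$opt" in
            c) STATEMENT=$OPTARG;;
            S) SCHEMA=$OPTARG;;
            T) TABLE=$OPTARG;;
        esac
    done
    echo "$STATEMENT, $SCHEMA, $TABLE"
}

parse_args -c CREATE -S schema1 -T tabela1     # CREATE, schema1, tabela1
parse_args -c TRUNCATE -S schema2 -T tabela2   # TRUNCATE, schema2, tabela2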

Can a bash script be written inside an AWS Lambda function?

Can I write a bash script inside a Lambda function? I read in the AWS docs that it can execute code written in Python, NodeJS and Java 8.
It is mentioned in some documents that it might be possible to use Bash, but there is no concrete evidence or example supporting it.
AWS recently announced the "Lambda Runtime API and Lambda Layers", two new features that enable developers to build custom runtimes. So it's now possible to directly run even bash scripts in Lambda without hacks.
As this is a very new feature (November 2018), there isn't much material around yet and some manual work still needs to be done, but you can have a look at this GitHub repo for an example to start with (disclaimer: I didn't test it). Below is a sample handler in bash:
function handler () {
    EVENT_DATA=$1
    echo "$EVENT_DATA" 1>&2;
    RESPONSE="{\"statusCode\": 200, \"body\": \"Hello World\"}"
    echo $RESPONSE
}
This actually opens up the possibility to run any programming language within a Lambda. Here is an AWS tutorial about publishing custom Lambda runtimes.
Something that might help: I'm using Node to call the bash script. I uploaded the script and the Node.js file in a zip to Lambda, using the following code as the handler.
exports.myHandler = function(event, context, callback) {
    const execFile = require('child_process').execFile;
    execFile('./test.sh', (error, stdout, stderr) => {
        if (error) {
            return callback(error); // return so we don't also call back with stdout
        }
        callback(null, stdout);
    });
}
You can use the callback to return the data you need.
AWS supports custom runtimes now, based on this announcement here. I already tested a bash script and it worked. All you need is to create a new Lambda and choose a runtime of type Custom; it will create the following file structure:
mylambda_func
|- bootstrap
|- function.sh
Example Bootstrap:
#!/bin/sh
set -euo pipefail

# Handler format: <script_name>.<function_name>
# The script file <script_name>.sh must be located in
# the same directory as the bootstrap executable.
source $(dirname "$0")/"$(echo $_HANDLER | cut -d. -f1).sh"

while true
do
    # Request the next event from the Lambda Runtime
    HEADERS="$(mktemp)"
    EVENT_DATA=$(curl -v -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
    INVOCATION_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)

    # Execute the handler function from the script
    RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA")

    # Send the response to Lambda Runtime
    curl -v -sS -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$INVOCATION_ID/response" -d "$RESPONSE"
done
Example handler.sh:
function handler () {
    EVENT_DATA=$1
    RESPONSE="{\"statusCode\": 200, \"body\": \"Hello from Lambda!\"}"
    echo $RESPONSE
}
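Once deployed, the function can be smoke-tested from the AWS CLI. A sketch, where the function name is a placeholder (with AWS CLI v2 you may also need --cli-binary-format raw-in-base64-out):

$ aws lambda invoke --function-name my-bash-function \
    --payload '{"text": "Hello"}' response.json
$ cat response.json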
P.S. However, in some cases you can't achieve what's needed because of environment restrictions; such cases call for AWS Systems Manager Run Command, OpsWorks (Chef/Puppet) based on what you're more familiar with, or periodically running ScheduledTasks in an ECS cluster.
For more information about bash and how to zip and publish it, please check the following links:
https://docs.aws.amazon.com/en_us/lambda/latest/dg/runtimes-walkthrough.html
https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html
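As a rough sketch of the zip-and-publish steps those links walk through (the function name and role ARN are placeholders, and the runtime identifier may vary):

$ chmod +x bootstrap function.sh
$ zip function.zip bootstrap function.sh
$ aws lambda create-function \
    --function-name my-bash-function \
    --runtime provided \
    --handler function.handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/lambda-bash-role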
As you mentioned, AWS does not provide a way to write a Lambda function using Bash.
To work around it, if you really need a bash function, you can "wrap" your bash script within any supported language.
Here is an example with Java:
Process proc = Runtime.getRuntime().exec("./your_script.sh");
Depending on your business needs, you should consider using native languages (Python, NodeJS, Java) to avoid performance loss.
I was just able to capture the output of the shell command uname using Amazon Lambda with Python.
Below is the code base.
from __future__ import print_function
import json
import commands  # note: the commands module is Python 2 only

print('Loading function')

def lambda_handler(event, context):
    print(commands.getstatusoutput('uname -a'))
It displayed the output
START RequestId: 2eb685d3-b74d-11e5-b32f-e9369236c8c6 Version: $LATEST
(0, 'Linux ip-10-0-73-222 3.14.48-33.39.amzn1.x86_64 #1 SMP Tue Jul 14 23:43:07 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux')
END RequestId: 2eb685d3-b45d-98e5-b32f-e9369236c8c6
REPORT RequestId: 2eb685d3-b74d-11e5-b31f-e9369236c8c6 Duration: 298.59 ms Billed Duration: 300 ms Memory Size: 128 MB Max Memory Used: 9 MB
For more information, check the link: https://aws.amazon.com/blogs/compute/running-executables-in-aws-lambda/
It's possible using the 'child_process' node module.
const exec = require('child_process').exec;

exec('echo $PWD && ls', (error, stdout, stderr) => {
    if (error) {
        console.log("Error occurs");
        console.error(error);
        return;
    }
    console.log(stdout);
    console.log(stderr);
});
This will display the current working directory and list the files.
Now you can create Lambda functions written in any kind of language by providing a custom runtime that teaches Lambda to understand the syntax of the language you want to use.
You can follow this to learn more: AWS Lambda runtimes
As others have pointed out, in Node.js you can use the child_process module, which is built into Node.js. Here's a complete working sample:
app.js:
'use strict'

const childproc = require('child_process')

module.exports.handler = (event, context) => {
    return new Promise((resolve, reject) => {
        const commandStr = "./script.sh"
        const options = {
            maxBuffer: 10000000,
            env: process.env
        }
        childproc.exec(commandStr, options, (err, stdout, stderr) => {
            if (err) {
                console.log("ERROR:", err)
                return reject(err)
            }
            console.log("output:\n", stdout)
            const response = {
                statusCode: 200,
                body: {
                    output: stdout
                }
            }
            resolve(response)
        })
    })
}
script.sh:
#!/bin/bash
echo $PWD
ls -l
response.body.output is
/var/task
total 16
-rw-r--r-- 1 root root 751 Oct 26 1985 app.js
-rwxr-xr-x 1 root root 29 Oct 26 1985 script.sh
(NOTE: I ran this in an actual Lambda container, and it really does show the year as 1985).
Obviously, you can put whatever shell commands you want into script.sh, as long as they're available in the pre-built Lambda container. You can also build your own custom Lambda container if you need a command that's not in the pre-built one.
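For the custom-container route, a rough sketch of building and pushing an image to Amazon ECR (the account ID, region, and image name are all placeholders):

$ docker build -t my-lambda-image .
$ aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
$ docker tag my-lambda-image:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-lambda-image:latest
$ docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-lambda-image:latest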

mongodb export and remove docs

I would like to run a unix cron every day that does:
export all docs that are over 3 months old to a file
remove the same docs from the collection.
For the export part I use:
mongoexport --db mydb --collection mycollection --query "`./test2.sh`" --out ./test2.json
and the "./test2.sh" file contains:
#!/bin/bash
d=`date --date="-3 month" -u +"%Y-%m-%dT%H:%M:%SZ"`
echo '{ "timeCreated": { "$lte": { "$date": "'$d'" } } }'
For the remove part I can do:
mongo mydb /home/dev/removeDocs.js
removeDocs.js:
var d = new Date();
d.setMonth(d.getMonth()-3);
db.GameHistory.remove({ timeCreated: { $lte: d} });
How can I synchronize the 2 commands? I want to run the remove command after the export has finished. Can I merge the 2 into the same cron?
Yes, you can.
The easiest way is to merge both commands into a single one-liner:
mongoexport --db mydb --collection mycollection --query "`./test2.sh`" --out ./test2.json && mongo mydb /home/dev/removeDocs.js
But I would recommend you create a shell script to archive your database:
#!/bin/bash
set -e # stop on first exception
mongoexport --db mydb --collection mycollection --query "`./test2.sh`" --out ./test2.json
mongo mydb /home/dev/removeDocs.js
If you want to append each new chunk of exported data, you should replace --out with standard unix stdio redirection:
mongoexport --db mydb --collection mycollection --query "`./test2.sh`" >> ./test2.json
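To wire this into cron, a sketch of a daily crontab entry (the script path and schedule are placeholders; the script is assumed to contain the commands above):

# Run the archive script every day at 02:30, logging output
30 2 * * * /home/dev/archive_mongo.sh >> /var/log/archive_mongo.log 2>&1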
