I am trying to execute a shell script placed in a bucket using a Dataflow job. I can execute gsutil commands from this job using the Direct Runner:
String[] cmdline = { "cmd.exe", "/c", "gsutil ls gs://Bucketname" };
Process p = Runtime.getRuntime().exec(cmdline);
BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
String line = null;
while ((line = reader.readLine()) != null) {
    System.out.println(line);
}
Note: I will use the Dataflow runner to execute the script because I am using a Windows machine.
Try using this. It works in my case. You have to deploy the code to the cloud as a .jar or a Maven project. The path /home/*/test.sh is in the cloud console.
String[] cmd = {"sh", "/home/akash/test.sh", "/home/akash/"};
Runtime.getRuntime().exec(cmd);
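If you also need the script's output or exit status, you can read the process streams and wait for completion, as in the Direct Runner snippet above. A minimal sketch (the /home/akash/ paths are taken from the answer's example):
String[] cmd = {"sh", "/home/akash/test.sh", "/home/akash/"};
Process p = Runtime.getRuntime().exec(cmd);
BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
String line;
while ((line = reader.readLine()) != null) {
    System.out.println(line);   // script output
}
int exitCode = p.waitFor();     // 0 means the script finished successfully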
I'm trying to manage a router via a Java application using the JCraft JSch library.
I'm trying to send the router config to a TFTP server. The problem is in my Java code, because this works with PuTTY.
This is my Java code:
int port=22;
String name ="R1";
String ip ="192.168.18.100";
String password ="root";
JSch jsch = new JSch();
Session session = jsch.getSession(name, ip, port);
session.setPassword(password);
session.setConfig("StrictHostKeyChecking", "no");
System.out.println("Establishing Connection...");
session.connect();
System.out.println("Connection established.");
ChannelExec channelExec = (ChannelExec)session.openChannel("exec");
InputStream in = channelExec.getInputStream();
channelExec.setCommand("enable");
channelExec.setCommand("copy run tftp : ");
//Setting the ip of TFTP server
channelExec.setCommand("192.168.50.1 : ");
// Setting the name of file
channelExec.setCommand("Config.txt ");
channelExec.connect();
BufferedReader reader = new BufferedReader(new InputStreamReader(in));
String line;
int index = 0;
StringBuilder sb = new StringBuilder();
while ((line = reader.readLine()) != null)
{
System.out.println(line);
}
session.disconnect();
I get
Line has an invalid autocommand '192.168.50.1'
The problem is: how can I run those successive commands?
Calling ChannelExec.setCommand multiple times has no effect.
And even if it had, I'd guess that the 192.168.50.1 : and Config.txt are not commands, but inputs to the copy run tftp : command, aren't they?
If that's the case, you need to write them to the command input.
Something like this:
ChannelExec channelExec = (ChannelExec) session.openChannel("exec");
channelExec.setCommand("copy run tftp : ");
OutputStream out = channelExec.getOutputStream();
channelExec.connect();
out.write(("192.168.50.1 : \n").getBytes());
out.write(("Config.txt \n").getBytes());
out.flush();
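To see whether the router actually accepted those inputs, you will also want to read the command's output. A minimal continuation of the sketch above; it assumes the channel's InputStream was obtained before connect(), as in the question's code:
// InputStream in = channelExec.getInputStream();  // obtain this before connect()
BufferedReader reader = new BufferedReader(new InputStreamReader(in));
String line;
while ((line = reader.readLine()) != null) {
    System.out.println(line);   // router prompts and the result of the copy
}
channelExec.disconnect();
session.disconnect();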
In general, it's always better to check whether the command has a better "API" than feeding the commands to its input. Commands usually have command-line arguments/switches that serve the desired purpose better.
A related question: Provide inputs to individual prompts separately with JSch.
public void readFile(String file) throws IOException {
    Configuration conf = new Configuration();
    conf.addResource(new Path("/usr/local/hadoop-2.7.3/etc/hadoop/core-site.xml"));
    conf.addResource(new Path("/usr/local/hadoop-2.7.3/etc/hadoop/hdfs-site.xml"));
    conf.addResource(new Path("/usr/local/hadoop-2.7.3/etc/hadoop/mapred-site.xml"));

    FileSystem fileSystem = FileSystem.get(conf);
    System.out.println("DefaultFS: " + conf.get("fs.defaultFS"));
    System.out.println("Home directory: " + fileSystem.getHomeDirectory());

    Path path = new Path(file);
    if (!fileSystem.exists(path)) {
        System.out.println("File " + file + " does not exist");
        return;
    }
}
I am very new to Hadoop and I am wondering whether it is possible to execute this Hadoop Java client code using "java -jar".
My code works with the "hadoop jar" command. However, when I try to execute it with "java -jar" instead of "hadoop jar", it can't locate the file in HDFS, and the method getHomeDirectory() returns a local path that doesn't exist.
Are my configuration files not added correctly? Why does the code only work when executed under the Hadoop command?
Instead of passing a Path object, pass the file path as a string:
conf.addResource("/usr/local/hadoop-2.7.3/etc/hadoop/core-site.xml");
conf.addResource("/usr/local/hadoop-2.7.3/etc/hadoop/hdfs-site.xml");
conf.addResource("/usr/local/hadoop-2.7.3/etc/hadoop/mapred-site.xml");
Or else you could add these files to the classpath and try:
conf.addResource("core-site.xml");
conf.addResource("hdfs-site.xml");
conf.addResource("mapred-site.xml");
How to execute a Linux command or shell script from Apache JMeter
Does anyone know how to execute Linux commands from JMeter?
I found this link online: http://www.technix.in/execute-linux-command-shell-script-apache-jmeter/ and I tried the steps, but it is not working. I can't see the SSH Sampler.
If anyone has had any success running shell scripts from JMeter, please share.
Thanks in advance.
If you need to execute a command on a remote system, take the following steps:
Download JSch.jar - the library which provides SSH and SCP protocol operations for Java - and place it in the /lib folder of your JMeter installation
Download groovy-all.jar - Groovy scripting engine support for JMeter - and drop it into the /lib folder as well
Restart JMeter to pick the libraries up
Add a JSR223 Sampler to your Test Plan and choose "groovy" from the "Language" drop-down
Follow the example code from the Exec.java JSch tutorial to implement your own logic.
You can also refer to the snippet below, which executes the ls command on a remote *nix system and returns the command execution result. Make sure that you provide a valid username, hostname and password so the sampler can work.
import com.jcraft.jsch.Channel;
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
JSch jSch = new JSch();
Session session = jSch.getSession("username", "hostname", 22);
session.setConfig("StrictHostKeyChecking", "no");
session.setPassword("password");
session.connect();
Channel channel = session.openChannel("exec");
String command = "ls";
((ChannelExec) channel).setCommand(command);
channel.setInputStream(null);
((ChannelExec) channel).setErrStream(System.err);
InputStream in = channel.getInputStream();
channel.connect();
StringBuilder rv = new StringBuilder();
rv.append("New system date: ");
byte[] tmp = new byte[1024];
while (true) {
while (in.available() > 0) {
int i = in.read(tmp, 0, 1024);
if (i < 0) break;
rv.append(new String(tmp, 0, i));
}
if (channel.isClosed()) {
break;
}
try {
Thread.sleep(100);
} catch (Exception ee) {
ee.printStackTrace();
}
}
in.close();
channel.disconnect();
session.disconnect();
SampleResult.setResponseData(rv.toString().getBytes());
See Beanshell vs JSR223 vs Java JMeter Scripting: The Performance-Off You've Been Waiting For! for details on Groovy scripting engine installation and best scripting practices.
Have a look at the OS Process Sampler, which is made for this and available in the core:
http://jmeter.apache.org/usermanual/component_reference.html#OS_Process_Sampler
You can use Beanshell scripting inside JMeter; then you can have something like this:
String command="your command here";
StringBuffer output = new StringBuffer();
Process p;
try {
p = Runtime.getRuntime().exec(command);
p.waitFor();
BufferedReader reader =
new BufferedReader(new InputStreamReader(p.getInputStream()));
String line = "";
while ((line = reader.readLine())!= null) {
output.append(line + "\n");
}
} catch (Exception e) {
e.printStackTrace();
}
log.info(output.toString());
Look at this answer about executing a jar file. The idea is the same for any other OS command.
Use the OS Process Sampler.
Fill the command field with an OS shell, like /bin/bash or an analogue.
Set the first argument to -c.
Set the next argument to your command.
The sampler is ready to execute :)
I am trying to run my Hadoop program on the Amazon Elastic MapReduce system. My program takes an input file from the local filesystem which contains parameters needed for the program to run. However, since the file is normally read from the local filesystem with FileInputStream, the task fails when executed in the AWS environment with an error saying that the parameter file was not found. Note that I have already uploaded the file to Amazon S3. How can I fix this problem? Thanks. Below is the code that I use to read the parameter file and consequently read the parameters in the file.
FileInputStream fstream = new FileInputStream(path);
DataInputStream datain = new DataInputStream(fstream);
BufferedReader br = new BufferedReader(new InputStreamReader(datain));
String[] args = new String[7];
int i = 0;
String strLine;
while ((strLine = br.readLine()) != null) {
args[i++] = strLine;
}
If you must read the file from the local file system, you can configure your EMR job to run with a bootstrap action. In that action, simply copy the file from S3 to a local file using s3cmd or similar.
You could also go through the Hadoop FileSystem class to read the file, as I'm pretty sure EMR supports direct access like this. For example:
FileSystem fs = FileSystem.get(new URI("s3://my.bucket.name/"), conf);
DataInputStream in = fs.open(new Path("/my/parameter/file"));
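From there you can read the parameters the same way as before, just from the Hadoop-backed stream instead of a FileInputStream. A minimal sketch reusing the placeholder bucket and path from the lines above:
FileSystem fs = FileSystem.get(new URI("s3://my.bucket.name/"), conf);
BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(new Path("/my/parameter/file"))));
List<String> params = new ArrayList<String>();
String strLine;
while ((strLine = br.readLine()) != null) {
    params.add(strLine);   // one parameter per line, as in the question's code
}
br.close();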
I did not try Amazon Elastic MapReduce yet; however, it looks like a classic application of the distributed cache. You add a file to the cache using the -files option (if you implement Tool/ToolRunner) or the job.addCacheFile(URI uri) method, and access it as if it existed locally.
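A minimal sketch of that approach with the newer org.apache.hadoop.mapreduce API (the S3 path is a placeholder):
// driver side: register the S3 file with the distributed cache
Job job = Job.getInstance(conf, "my-job");
job.addCacheFile(new URI("s3://my.bucket.name/params.txt"));

// mapper/reducer side, e.g. in setup(): the cached file is localized
// into the task's working directory under its file name
URI[] cacheFiles = context.getCacheFiles();
String localName = new Path(cacheFiles[0].getPath()).getName();
BufferedReader br = new BufferedReader(new FileReader(localName));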
You can add this file to the distributed cache as follows:
...
String s3FilePath = args[0];
DistributedCache.addCacheFile(new URI(s3FilePath), conf);
...
Later, in configure() of your mapper/reducer, you can do the following:
...
Path s3FilePath;
@Override
public void configure(JobConf job) {
s3FilePath = DistributedCache.getLocalCacheFiles(job)[0];
FileInputStream fstream = new FileInputStream(s3FilePath.toString());
...
}
I use MEF to extend my web application, and I use the following folder structure:
> bin
> extensions
> Plugin1
> Plugin2
> Plugin3
To achieve this automatically, the plugin projects' output paths are set to these directories. My application works with and without Azure. My problem now is that it seems to be impossible to include the extensions subdirectory automatically in the Azure deployment package.
I've tried to set the build dependencies too, without success.
Is there another way?
Well, I've struggled with the bin folder. The issue (if we may say "issue") is that the packaging process packs only what has "Copy to Output Directory" set to "Copy if newer/always" for the Web application (Web Role) project. Other assemblies in the BIN folder which are not explicitly referenced by the Web Application will not get deployed.
In my case, where I have pretty "static" references, I just pack them in a ZIP, put them in a BLOB container, and then use the Azure Bootstrapper to download, extract and put these references into the BIN folder. However, because I don't know the actual location of the BIN folder in a startup task, I use helper wrappers around the bootstrapper to do the trick.
You will need to get the list of local sites, which can be accomplished by something similar to:
public IEnumerable<string> WebSiteDirectories
{
get
{
string roleRootDir = Environment.GetEnvironmentVariable("RdRoleRoot");
string appRootDir = (RoleEnvironment.IsEmulated) ? Path.GetDirectoryName(AppDomain.CurrentDomain.BaseDirectory) : roleRootDir;
XDocument roleModelDoc = XDocument.Load(Path.Combine(roleRootDir, "RoleModel.xml"));
var siteElements = roleModelDoc.Root.Element(_roleModelNs + "Sites").Elements(_roleModelNs + "Site");
return
from siteElement in siteElements
where siteElement.Attribute("name") != null
&& siteElement.Attribute("name").Value == "Web"
&& siteElement.Attribute("physicalDirectory") != null
select Path.Combine(appRootDir, siteElement.Attribute("physicalDirectory").Value);
}
}
Where the _roleModelNs variable is defined as follows:
private readonly XNamespace _roleModelNs = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition";
Next you will need something similar to this method:
public void GetRequiredAssemblies(string pathToWebBinfolder)
{
string args = string.Join("",
#"-get https://your_account.blob.core.windows.net/path/to/plugin.zip -lr $lr(temp) -unzip """,
pathToWebBinfolder,
#""" -block");
this._bRunner.RunBootstrapper(args);
}
And RunBootstrapper has the following signature:
public bool RunBootstrapper (string args)
{
bool result = false;
ProcessStartInfo psi = new ProcessStartInfo();
psi.FileName = this._bootstrapperPath;
psi.Arguments = args;
Trace.WriteLine("AS: Calling " + psi.FileName + " " + psi.Arguments + " ...");
psi.CreateNoWindow = true;
psi.ErrorDialog = false;
psi.UseShellExecute = false;
psi.WindowStyle = ProcessWindowStyle.Hidden;
psi.RedirectStandardOutput = true;
psi.RedirectStandardInput = false;
psi.RedirectStandardError = true;
// run elevated
// psi.Verb = "runas";
try
{
// Start the process with the info we specified.
// Call WaitForExit and then the using statement will close.
using (Process exeProcess = Process.Start(psi))
{
exeProcess.PriorityClass = ProcessPriorityClass.High;
string outString = string.Empty;
// use asynchronous reading for at least one of the streams
// to avoid deadlock
exeProcess.OutputDataReceived += (s, e) =>
{
outString += e.Data;
};
exeProcess.BeginOutputReadLine();
// now read the StandardError stream to the end
// this will cause our main thread to wait for the
// stream to close
string errString = exeProcess.StandardError.ReadToEnd();
Trace.WriteLine("Process out string: " + outString);
Trace.TraceError("Process error string: " + errString);
result = true;
}
}
catch (Exception e)
{
Trace.TraceError("AS: " + e.Message + e.StackTrace);
result = false;
}
return result;
}
Of course, in your case you might want something a bit more complex, where you first fetch all plugins (if each plugin is in its own ZIP) via code and then execute GetRequiredAssemblies multiple times, once per plugin. This code might execute in the RoleEntryPoint's OnStart method.
And if you plan to be more dynamic, you can also override the Run() method of your RoleEntryPoint subclass and check for new plugins every minute, for example.
Hope this helps!
EDIT
And how can you get the plugins deployed? Well, you can either upload your plugins manually, or you can develop a small custom BuildTask to automatically upload your plugin upon build.