How to get access from a local machine to a server-side terminal using Node.js

I want to get access to a server-side terminal from Node.js code running on my local machine. I know that if you want to execute some terminal commands from Node.js code, you can use child_process, but how do I make a connection to a server-side terminal and send commands there? I hope my question is clear.

If SSH is enabled on your server, you can use the node-ssh package from npm:
var fs = require('fs')
var path = require('path')
var node_ssh = require('node-ssh')
var ssh = new node_ssh()

ssh.connect({
  host: 'localhost',
  username: 'steel',
  privateKey: '/home/steel/.ssh/id_rsa'
}).then(function () {
  /* your ssh interactions with the server go here */
  /* use the node-ssh API to execute commands, upload files... */
})
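Inside that callback you can use the node-ssh API; for example, execCommand runs a single command on the remote machine and resolves with its output. A minimal sketch ('uptime' and the working directory are only placeholders):
ssh.connect({
  host: 'localhost',
  username: 'steel',
  privateKey: '/home/steel/.ssh/id_rsa'
}).then(function () {
  // run one command on the remote machine and capture its output
  return ssh.execCommand('uptime', { cwd: '/home/steel' })
}).then(function (result) {
  console.log('stdout: ' + result.stdout)
  console.log('stderr: ' + result.stderr)
})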

Related

How to allow fs and multer to write files on AWS EC2

I am new to programming and recently finished a full MEAN stack project which is deployed on AWS EC2.
Project code:
https://github.com/Cryfoo/13
Deployed at: http://35.166.172.216/
On the server side, it uses fs.writeFile to save game logs after each game.
// Code from server/config/game.js, lines 1361~1364
var filename = 10000 + roomNum;
filename = filename + "_" + endTime;
fs.writeFile("server/logs/" + filename + ".txt", variables[roomNum].logs, function(err) {});
On the client side, it sends an HTTP request to the server and uses multer to upload a user's profile picture.
// Code from server/controllers/user.js, lines 3~12
var storage = multer.diskStorage({
    destination: function (req, file, callback) {
        callback(null, "./client/static/profile");
    },
    filename: function (req, file, callback) {
        callback(null, file.originalname);
    }
});
var upload = multer({storage: storage, limits: {fileSize: 5000000}}).single("profile");
It works well locally on my laptop, but these two features do not work on EC2. I am assuming the problem has to do with permission to write files. How do I allow fs and multer to write files on EC2?
I have done a lot of searching for all the problems during this project and found solutions on Stack Overflow and Google, but this one I can't figure out. I apologize if I am not being specific enough (first time posting a question here). Thanks for the help in advance though.
If it's an issue with permissions on the folder, then you just need to change them:
sudo chmod a+x -R server/logs
Edit: you probably need to run the command with sudo
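Independent of the folder permissions, note that the empty callback in the question's fs.writeFile swallows whatever error EC2 raises. A small sketch like the one below (reusing filename and variables[roomNum].logs from the question) logs that error, so you can see whether it is EACCES (a permissions problem) or ENOENT (the relative server/logs path does not exist from the process's working directory), and creates the folder if it is missing:
var fs = require('fs');

var logDir = "server/logs";   // same relative path as in the question
if (!fs.existsSync(logDir)) {
    // create the folder if it is missing (recursive option needs Node 10+)
    fs.mkdirSync(logDir, { recursive: true });
}
fs.writeFile(logDir + "/" + filename + ".txt", variables[roomNum].logs, function (err) {
    if (err) console.error("Failed to write game log:", err);
});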

Multiple script commands through SSH on Juniper OS

My question concerns networking equipment, specifically Juniper OS.
I would like to execute a lot of commands on the switch through SSH, and not manually, but with a script.
The problem is that when I push a command through SSH (for example 'configure') to change the software configuration, it does change the prompt, but the next command, which is only valid at that configuration level, fails: the mode entered by the previous command is not retained, so each new command starts back at the initial prompt.
Example in Ruby with net/ssh :
ssh = Net::SSH.start("X.X.X.X", LOGIN, :password => PASSWORD)
ssh.exec!("configure") # -> Entering configuration mode
ssh.exec!("set system services telnet") # -> error: unknown command: set
ssh.close
On the Juniper CLI, there is no '&' or ';' to chain multiple commands.
Is it possible to insert a carriage return into this kind of command and put all the commands in one request?
Otherwise, how can I execute several commands while keeping the state between them?
Thanks in advance.
OK, the only solution I found is to concatenate the instructions into a single request over one connection.
Example :
ssh.exec!('configure;
set system services telnet;
delete system services web-management;
set system login class READ permissions view-configuration;
set system login class READ allow-commands show;
...
commit;')
Hope this will help somebody; don't hesitate to improve it!
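The same trick should carry over to Node.js with the node-ssh package shown at the top of this thread; a rough, untested sketch (host, credentials and commands are only illustrative):
var node_ssh = require('node-ssh')
var ssh = new node_ssh()

ssh.connect({ host: 'X.X.X.X', username: 'LOGIN', password: 'PASSWORD' }).then(function () {
  // send the whole configuration sequence in a single exec, as in the Ruby answer
  return ssh.execCommand([
    'configure',
    'set system services telnet',
    'delete system services web-management',
    'commit'
  ].join('\n'))
}).then(function (result) {
  console.log(result.stdout)
  console.log(result.stderr)
})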
I know the question is about Ruby, but I hope my findings in Java could be useful. I used a construction like this with eBay/parallec (I think it will work in JSch too):
public static void sshVmWithPassword() {
    ParallelClient pc = new ParallelClient();
    pc.prepareSsh().setTargetHostsFromString(HOST)
        .setSshCommandLine("show version;\nshow log")
        .setSshUserName(USERNAME)
        .setSshPassword(PASSWORD)
        .execute(new ParallecResponseHandler() {
            public void onCompleted(ResponseOnSingleTask res,
                    Map<String, Object> responseContext) {
                System.out.println("Response: " + res.toString()
                        + " host: " + res.getHost()
                        + " errmsg: " + res.getErrorMessage());
            }
        });
    pc.releaseExternalResources();
}

node.js: Run external command in new window

I'm writing a node.js script that permanently runs in the background and initiates file downloads from time to time. I want to use WGET for downloading the files, since it seems more robust until I really know what I'm doing with node.js.
The problem is that I want to see the progress of the downloads in question, but I can't find a way to start wget through node.js such that a new shell window is opened to display wget's output.
Node's exec and spawn functions run external commands in the background and would let me access the output stream. But since my script runs in the background (hidden), there's not much I could do with the output stream.
I've tried opening a new shell window by running "cmd /c wget..." but that didn't make any difference.
Is there a way to run external command-line processes in a new window through node.js?
You can use node-progress and Node's HTTP client (or request) for this; no need for wget:
https://github.com/visionmedia/node-progress
Example:
var ProgressBar = require('progress')   // the npm package name is "progress"
  , https = require('https');

var req = https.request({
    host: 'download.github.com'
  , port: 443
  , path: '/visionmedia-node-jscoverage-0d4608a.zip'
});

req.on('response', function (res) {
  var len = parseInt(res.headers['content-length'], 10);

  console.log();
  var bar = new ProgressBar('  downloading [:bar] :percent :etas', {
      complete: '='
    , incomplete: ' '
    , width: 20
    , total: len
  });

  res.on('data', function (chunk) {
    bar.tick(chunk.length);
  });

  res.on('end', function () {
    console.log('\n');
  });
});

req.end();
UPDATE:
Since you want to do this in a background process and listen for download progress (in a separate process, or what have you), you can achieve that with pub/sub functionality, either:
use a message queue like Redis, RabbitMQ or ZeroMQ
start a TCP server on a known port (or a UNIX domain socket) and listen on it
Resources:
http://nodejs.org/api/net.html
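As an illustration of the second option, the background downloader could publish progress over a local TCP socket that any other process can connect to and read. A rough sketch (port 7000 is arbitrary):
var net = require('net');

// background downloader: keep a list of connected listeners
var clients = [];
var server = net.createServer(function (socket) {
  clients.push(socket);
  socket.on('close', function () {
    clients.splice(clients.indexOf(socket), 1);
  });
});
server.listen(7000, '127.0.0.1');

// call this from the response's 'data' handler (see the example above)
function reportProgress(received, total) {
  var line = JSON.stringify({ received: received, total: total }) + '\n';
  clients.forEach(function (client) {
    client.write(line);
  });
}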

node.js simple tcp test

Here is the code you can find everywhere on the net:
var net = require('net');

var server = net.createServer(function (socket) {
  socket.write("Echo server\r\n");
  socket.pipe(socket);
});

server.listen(1337, "127.0.0.1");
A simple TCP server that will echo whatever you send it. How do I send data to it? What tools/commands do I need on a Mac to test this server?
Use nc aka netcat. In Terminal.app, while your node app is running:
$ nc localhost 1337
Echo server
Ta-da!
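If you would rather test it from Node itself instead of netcat, a tiny client does the same thing (assuming the server above is listening on 127.0.0.1:1337):
var net = require('net');

var client = net.connect(1337, '127.0.0.1', function () {
  // send something once connected; the server echoes it back
  client.write('hello\r\n');
});

client.on('data', function (data) {
  console.log(data.toString());   // "Echo server", then "hello"
  client.end();
});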

Sending an email from R using the sendmailR package

I am trying to send an email from R using the sendmailR package. The code below works fine when I run it on my PC, and I receive the email. However, when I run it on my MacBook Pro, it fails with the following error:
library(sendmailR)
from <- sprintf("<sendmailR#%s>", Sys.info()[4])
to <- "<myemail#gmail.com>"
subject <- "TEST"
sendmail(from, to, subject, body,
         control=list(smtpServer="ASPMX.L.GOOGLE.COM"))
Error in socketConnection(host = server, port = port, blocking = TRUE) :
cannot open the connection
In addition: Warning message:
In socketConnection(host = server, port = port, blocking = TRUE) :
ASPMX.L.GOOGLE.COM:25 cannot be opened
Any ideas as to why this would work on a PC but not on a Mac? I turned the firewall off on both machines.
Are you able to send email via the command-line?
So, first of all, fire up a Terminal and then
$ echo "Test 123" | mail -s "Test" user#domain.com
Look into /var/log/mail.log, or better use
$ tail -f /var/log/mail.log
in a different window while you send your email. If you see something like
... setting up TLS connection to smtp.gmail.com[xxx.xx.xxx.xxx]:587
... Trusted TLS connection established to smtp.gmail.com[xxx.xx.xxx.xxx]:587:\
TLSv1 with cipher RC4-MD5 (128/128 bits)
then you succeeded. Otherwise, it means you have to configure your mailing system. I have been using Postfix with Gmail for two years now, and I have never had a problem with it. Basically, you need to grab the Equifax certificate, Equifax_Secure_CA.pem, from here: http://www.geotrust.com/resources/root-certificates/. (They were using Thawte certificates before, but they changed last year.) Then, assuming you use Gmail:
Create relay_password in /etc/postfix and put a single line like this (with your correct login and password):
smtp.gmail.com login#gmail.com:password
then in a Terminal,
$ sudo postmap /etc/postfix/relay_password
to update Postfix lookup table.
Add the certificates in /etc/postfix/certs, or any folder you like, then
$ sudo c_rehash /etc/postfix/certs/
(i.e., rehash the certificates with OpenSSL).
Edit /etc/postfix/main.cf so that it includes the following lines (adjust the paths if needed):
relayhost = smtp.gmail.com:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/relay_password
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = may
smtp_tls_CApath = /etc/postfix/certs
smtp_tls_session_cache_database = btree:/etc/postfix/smtp_scache
smtp_tls_session_cache_timeout = 3600s
smtp_tls_loglevel = 1
tls_random_source = dev:/dev/urandom
Finally, just reload the Postfix process, with e.g.
$ sudo postfix reload
(a combination of start/stop works too).
You can choose a different port for the SMTP, e.g. 465.
It's still possible to use SASL without TLS (the above steps are basically the same), but in both cases the main problem is that your login information is stored in a plain text file... Also, should you want to use your MobileMe account, just replace the Gmail SMTP server with smtp.me.com.
