Rust ssh2 download using scp - windows

I've spent the day trying to use ssh2 to download a file via scp. To test this I've set up an OpenSSH server on my localhost on port 22 and created some dummy files for the program to copy from one location to another. I'm running this on Windows 10 and I've not changed anything in my ssh config file since installing the OpenSSH feature this morning.
It works when I'm doing the following in cmd:
scp username@localhost:c/users/username/logs/from.log c:/users/username/logs/to.log
But for some reason when I'm running my program it exits with the following error:
Error: Error { code: Session(-28), msg: "Failed to recv file" }
This is the code I'm trying to run; the call to sess.scp_recv is where I'm getting the error:
use std::error::Error;
use std::io::Read;
use std::net::TcpStream;
use std::path::Path;

pub fn test_download() -> Result<(), Box<dyn Error>> {
    let conn = TcpStream::connect("localhost:22")?;
    let mut sess = ssh2::Session::new()?;
    sess.set_tcp_stream(conn);
    sess.set_timeout(5_000);
    sess.handshake()?;
    sess.userauth_password("username", "password")?;
    let (mut remote_file, _) = sess.scp_recv(Path::new(r"c:\users\username\logs\from.log"))?;
    let mut content = String::new();
    remote_file.read_to_string(&mut content)?;
    remote_file.send_eof()?;
    remote_file.wait_eof()?;
    remote_file.close()?;
    remote_file.wait_close()?;
    std::fs::write(r"c:\users\username\logs\to.log", &content)?;
    Ok(())
}
I've been googling like a maniac and I can't find an explanation for this, so here I am. Note that I'm new to both Rust and ssh.
Update
I ran sshd.exe -d and connected using my program and the logging looks fine right up until the very end:
debug1: userauth-request for user user service ssh-connection method password [preauth]
debug1: attempt 0 failures 0 [preauth]
debug1: user domain\\user matched group list administrators at line 87
Accepted password for user from 127.0.0.1 port 52813 ssh2
debug1: monitor_child_preauth: user has been authenticated by privileged process
debug1: monitor_read_log: child log fd closed
debug1: Not running as SYSTEM: skipping loading user profile
**CreateProcessAsUserW failed error:1314**
fork of unprivileged child failed
debug1: do_cleanup
Error 1314 is ERROR_PRIVILEGE_NOT_HELD ("A required privilege is not held by the client"), so it looks like there's some sort of privilege situation going on with Windows here.
Solved
I finally solved this by not using the scp_recv function at all: instead I ran pscp -pw password -scp remote local via std::process::Command, supplying the paths and password.
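A minimal sketch of that workaround, assuming PuTTY's pscp is on PATH. The helper function, the placeholder paths, and the password are ours, not part of any library; in the real fix you would run the built command with .status() and check the exit code, which is left as a comment here so the sketch stays side-effect free:

```rust
use std::process::Command;

// Hypothetical helper: builds the pscp invocation used as the workaround.
// -pw supplies the password and -scp forces the SCP protocol, matching the
// command line described above. Paths and password are placeholders.
fn build_pscp_command(password: &str, remote: &str, local: &str) -> Command {
    let mut cmd = Command::new("pscp");
    cmd.arg("-pw")
        .arg(password)
        .arg("-scp")
        .arg(remote)
        .arg(local);
    cmd
}

fn main() {
    let cmd = build_pscp_command(
        "password",
        "username@localhost:/c/users/username/logs/from.log",
        r"c:\users\username\logs\to.log",
    );
    // In the real fix you would spawn it with `cmd.status()` and check the
    // exit code; here we only print the command that would be executed.
    println!("{:?}", cmd);
}
```

Shelling out sidesteps the sshd privilege problem entirely because pscp performs the whole transfer in its own process.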

Related

Websocket connection ssl problem on startup (Windows)

We have a program called server.exe which starts a websocket server (ws, wss) on the client's computer.
Its main purpose is to accept connections from a browser (127.0.0.1) and send some data to it. It uses the OpenSSL DLLs (1.0.2.20).
Problem: after Windows starts up, server.exe does not work. It does not accept secure connections.
Debug Log with errors:
10.12.2019_16:11:09:0861 <<< ID = 728, msg: SSL library error during handshake on fd = 728 error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher
10.12.2019_16:11:09:0876 <<< ID = 592, msg: SSL library error during handshake on fd = 592 error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
10.12.2019_16:11:09:0876 <<< ID = 776, msg: SSL library error during handshake on fd = 776 error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
But!! If we just restart server.exe, everything begins to work fine!
If we launch server.exe from a .bat file (with a 5 second delay), everything works too!
Why? How can we solve this problem?
Fixed.
server.exe could not find the path to the DLLs. Presumably, when launched at startup its working directory differed, so the OpenSSL DLLs were not on the search path and the wss handshakes failed until the process was restarted from a normal context.

Application using MQTT protocol from azure sdk, doesn´t work behind a corporative proxy

I'm a newbie in this matter, and I don't know why my application only works and runs on an open network; when it is behind a proxy I get an error in return.
I'm using a Raspberry Pi Zero with Raspbian Stretch, the azure-iot-sdk-python, and a Squid proxy.
I have already tried these things:
The proxy allows HTTPS connections, all ports are available without any restriction, and the address *****.azure-devices.net is in a whitelist in
$ nano /etc/squid/whitelist
Beyond that, I set the proxy in the operating system (Raspbian Stretch) in
$ nano /etc/environment
with the following configuration:
export http_proxy="http://192.168.2.254:3128/"
export https_proxy="https://192.168.2.254:3128/"
export no_proxy="localhost, 127.0.0.1"
And also in
$ nano ~/.bashrc
export http_proxy=http://192.168.2.254:3128
export https_proxy=https://192.168.2.254:3128
export no_proxy=localhost,127.0.0.1
And,
$ nano /etc/apt/apt.conf.d/90proxy
Acquire::http::Proxy "http://192.168.2.254:3128/";
Acquire::https::Proxy "https://192.168.2.254:3128/";
from iothub_client import IoTHubClient, IoTHubTransportProvider, IoTHubMessage
import time

CONNECTION_STRING = "HostName=******.azure-devices.net;DeviceId=***;SharedAccessKey=*********"
PROTOCOL = IoTHubTransportProvider.MQTT

def send_confirmation_callback(message, result, user_context):
    print("Confirmation received for message with result = %s" % (result))

if __name__ == '__main__':
    client = IoTHubClient(CONNECTION_STRING, PROTOCOL)
    message = IoTHubMessage("test message")
    client.send_event_async(message, send_confirmation_callback, None)
    print("Message transmitted to IoT Hub")
    while True:
        time.sleep(1)
Error: File: /usr/sdk/src/c/c-utility/adapters/socketio_berkeley.c Func: lookup_address_and_initiate_socket_connection Line: 282 Failure: getaddrinfo failure -3.
Error: File: /usr/sdk/src/c/c-utility/adapters/socketio_berkeley.c Func: socketio_open Line: 765 lookup_address_and_connect_socket failed
Error: File: /usr/sdk/src/c/c-utility/adapters/tlsio_openssl.c Func: on_underlying_io_open_complete Line: 760 Invalid tlsio_state. Expected state is TLSIO_STATE_OPENING_UNDERLYING_IO.
Error: File: /usr/sdk/src/c/c-utility/adapters/tlsio_openssl.c Func: tlsio_openssl_open Line: 1258 Failed opening the underlying I/O.
Error: File: /usr/sdk/src/c/umqtt/src/mqtt_client.c Func: mqtt_client_connect Line: 1000 Error: io_open failed
Error: File: /usr/sdk/src/c/iothub_client/src/iothubtransport_mqtt_common.c Func: SendMqttConnectMsg Line: 2122 failure connecting
You cannot use an HTTP proxy with (native) MQTT; they are two totally separate protocols.
If you can use MQTT over WebSockets, then you should be able to use an HTTP proxy, as WebSockets are initially established by upgrading an HTTP connection.
If you have a SOCKS proxy available on your network, you may be able to use that with native MQTT. The following question has hints on how to use a SOCKS proxy with Python: How can I use a SOCKS 4/5 proxy with urllib2?
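To make the WebSocket option concrete: a WebSocket starts life as an ordinary HTTP request carrying an Upgrade header, which is exactly what an HTTP proxy like Squid knows how to forward, while native MQTT is a raw TCP protocol with no HTTP phase at all. A stdlib-only sketch of that handshake request (the /$iothub/websocket path is the one Azure IoT Hub uses for MQTT over WebSockets; the helper name itself is ours, purely illustrative):

```python
import base64
import os

def websocket_upgrade_request(host: str, path: str = "/$iothub/websocket") -> str:
    """Build the HTTP request that opens a WebSocket (illustrative only)."""
    # The nonce is random per connection; the server hashes it back in
    # Sec-WebSocket-Accept to prove it understood the upgrade.
    key = base64.b64encode(os.urandom(16)).decode()
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    )

# Because this handshake is plain HTTP, a proxy can tunnel it; native MQTT
# has no such phase, which is why the connection above failed through Squid.
print(websocket_upgrade_request("example.azure-devices.net").splitlines()[0])
# prints: GET /$iothub/websocket HTTP/1.1
```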

Delete key failed. gpg: WARNING: unsafe ownership on homedir `/xxx/xxx_Import_tools/Keys'

The former xxx.BrokerImport key has expired, and I generated a new key with the same name 'xxx.Import' and imported it into the remote server. But I can't delete the former one. Because they have the same name, when I use 'xxx.Import' to encrypt it fails; I guess it used the former key, not the newly imported one.
I want to delete the expired key on the remote server.
Using the root user to execute commands:
[root@ip-xxx xxx_ansible]# gpg --delete-key B7C1CB35
But I get the following error:
gpg: WARNING: unsafe ownership on homedir `/XXX/XXX_Import_tools/Keys'
I used the root user to execute this; no idea why I don't have permission.
And I tried:
[root@ip-xxx xxx_ansible]# sudo gpg --delete-key B7C1CB35
then got another error:
gpg: key "B7C1CB35" not found: Unknown system error
gpg: B7C1CB35: delete key failed: Unknown system error
However, the public key does exist:
[root@ip-xxx xxx_ansible]# gpg --list-keys
gpg: WARNING: unsafe ownership on homedir `/xxx/xxx_Import_tools/Keys'
/xxx/xxx_Import_tools/Keys/pubring.gpg
------------------------------------------------
pub 2048R/B7C1CB35 2016-05-12 [expired: 2018-04-24]
uid xxx.Import <xxx@xxx.com>
pub 2048R/B75F015E 2018-07-23
uid xxx.Import <xxx@xxx.com>
sub 2048R/65AED995 2018-07-23
Does anyone have any idea about this? Hope to get your help.
Since I have resolved this issue, I'd like to share my solution.
I wanted to delete the key with a command directly, but since permission was denied, I instead deleted pubring.gpg / secring.gpg / trustdb.gpg on the remote server. After the next deployment the keys are imported again by the ansible script, and these files are regenerated.
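For reference, the "unsafe ownership" warning itself just means the gpg homedir is not owned by the user invoking gpg (or is group/world accessible). A sketch of the conventional fix, as an alternative to deleting the keyring files; a scratch directory stands in for /xxx/xxx_Import_tools/Keys, and the key ID is the one from the question:

```shell
# "unsafe ownership on homedir" means the homedir does not belong to the
# caller, or is too widely accessible. Fix ownership and permissions,
# then retry the delete.
KEYS_DIR=$(mktemp -d)                     # stand-in for /xxx/xxx_Import_tools/Keys
chown "$(id -u)":"$(id -g)" "$KEYS_DIR"   # homedir must belong to the caller
chmod 700 "$KEYS_DIR"                     # and be private to them
# With ownership fixed, the delete would be retried as:
#   gpg --homedir "$KEYS_DIR" --delete-key B7C1CB35
stat -c '%a' "$KEYS_DIR"                  # prints: 700
```

Note that running gpg under sudo can also change which homedir is consulted (sudo may reset $HOME), which would explain the "key not found" error seen above.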

FTPClient login method replies 530 PASS command failed from the server

FTPClient client = new FTPClient();
try {
    client.connect(currentServerHostname);
    System.out.println("Connection is successful");
    System.out.println("Reply String: " + client.getReplyString());
    client.login(currentServerUser, currentServerPass);
    System.out.println("login ok");
    System.out.println("Reply String: " + client.getReplyString());
} catch (Exception e) {
    System.out.println("No connection was established");
}
This code fails with an error like "530 PASS command failed" on the client.login() call.
Maybe you are trying to log in with a user that has a RACF account but does not have a UID defined in the OMVS segment.
See if this helps:
Problem(Abstract)
An FTP client user attempts to log on to a z/OS FTP server. After the correct password is entered for user verification, the following server reply is received:
530 PASS command failed
Diagnosing the problem
With the ACC and FLO DEBUG options specified for the FTP server (see the section "Documentation for FTP Server Problems" in the Technote MustGather: Collect Troubleshooting Data for FTP for the z/OS Communications Server for instructions on starting an FTP server trace), the following messages are included in the FTP server trace:
RA0786 pass: use __passwd() to verify the user
RA0809 pass: __passwd() failed - EDC5163I SAF/RACF extract error. (errno2=0x090C1C00)
RA0888 pass: The username access has been revoked
SR2910 reply: entered
SR2947 reply: --> 530 PASS command failed
In v1r12 and later, the messages appear as:
RA0862 pass: use __passwd() to verify the user
RA1100 pass: getpwnam() failed - EDC5163I SAF/RACF extract error. (errno2=0B490808)
SR3360 reply: entered
SR3397 reply: --> 530 PASS command failed
Consider adding an ACCESSERRORMSGS TRUE statement to the FTP.DATA input used by the server (typically referenced via the SYSFTPD DD in the started proc). This will cause the server to provide more information to the end user about the nature of any logon failure (besides just '530 PASS command failed'). Some sites' security policies restrict providing more information in these cases, which is why ACCESSERRORMSGS defaults to FALSE.
Resolving the problem
Each user logging in to the FTP server must have a UID defined in the user's OMVS segment. In this case, the user logging in does not have a UID defined in the OMVS segment, resulting in the PASS command failure. To resolve the problem, define a UID in the user's OMVS segment.
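As background on the reply itself: FTP reply codes follow RFC 959's class scheme, where the leading digit says whether a failure is transient (4xx, worth retrying) or permanent (5xx); 530 is a permanent "not logged in" reply, which is why retrying the password never helps here. A tiny self-contained illustration (the class and method names are ours, not part of Commons Net or the z/OS server):

```java
// Minimal sketch of interpreting FTP reply codes per RFC 959's class scheme.
public class FtpReply {
    static String classify(int code) {
        switch (code / 100) {
            case 1: return "preliminary";       // action started
            case 2: return "success";           // e.g. 230 user logged in
            case 3: return "more input needed"; // e.g. 331 need password
            case 4: return "transient failure"; // retry may succeed
            case 5: return "permanent failure"; // e.g. 530 not logged in
            default: return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(530 + " -> " + classify(530)); // prints "530 -> permanent failure"
    }
}
```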

Gulpfile, Rsync,RsyncWrapper, Rsync Exits with code 12

I'm trying to use gulp to handle my rsync task from a local dev environment to a running Vagrant machine.
The gulp task is set up like this:
var rsync = require('rsyncwrapper').rsync;
var secrets = require('./secrets.json');

// ###Rsync
// Ran from gulp
gulp.task('deploy', function() {
    rsync({
        ssh: true,
        src: './website/',
        dest: secrets.servers.dev.rsyncDest,
        recursive: true,
        syncDest: true,
        exclude: ['node_modules'],
        args: ['--verbose'],
        privateKey: './.vagrant/machines/default/virtualbox/private_key',
        onStdout: function (data) {
            console.log(data.toString());
        }
    }, function (error, stdout, stderr, cmd) {
        if (error) {
            // failed
            console.log(error.message);
        } else {
            // success
            console.log("folder synced!");
        }
    });
});
The secrets.json contains the path to my destination vagrant machine:
{
    "servers": {
        "dev": {
            "rsyncDest": "vagrant@192.168.2.101:/opt/webiste"
        }
    }
}
The rest of my gulpfile works without issue, and a normal vagrant rsync also transfers the files across.
However, when I run my deploy task, I simply get: Rsync exited with code 12.
After some googling I found that this means the protocol stream failed, but I am unsure even where to begin fixing this issue.
Any help would be greatly appreciated!
I encountered the same problem. It occurred when I destroyed and rebuilt my vagrant machine but left the old key in my ~/.ssh/known_hosts file. The error message from gulp-rsync isn't helpful. If you try to log in directly, you'll get an error like this:
>> ssh vagrant@192.168.50.104
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!        @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
87:9b:39:02:b8:96:6b:21:01:fa:b5:42:5f:0a:0b:f7.
Please contact your system administrator.
Add correct host key in /Users/me/.ssh/known_hosts to get rid of this message.
Offending RSA key in /Users/me/.ssh/known_hosts:36
RSA host key for 192.168.50.104 has changed and you have requested strict checking.
Host key verification failed.
To fix it, edit the file ~/.ssh/known_hosts and remove the line for the host key of your vagrant machine. In my case, it was the line beginning with 192.168.50.104.
The next time you run gulp-rsync, you'll have to type yes at the prompt.
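Instead of hand-editing known_hosts, ssh-keygen -R removes the stale entry for a host. A sketch against a scratch file (drop the -f option to operate on your real ~/.ssh/known_hosts; the key material below is fake):

```shell
KNOWN_HOSTS=$(mktemp)
# A stale entry like the one left behind by the rebuilt vagrant machine:
echo "192.168.50.104 ssh-rsa AAAAB3NzaC1yc2EFAKEKEY" > "$KNOWN_HOSTS"
# Remove every entry for that host (a .old backup is written alongside):
ssh-keygen -R 192.168.50.104 -f "$KNOWN_HOSTS"
grep -c "192.168.50.104" "$KNOWN_HOSTS" || true   # no entries remain
```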