I'm trying to send a message using the tibrvsend command.
The scenario is as follows:
I have a server running on the network; I'm able to ping the server's IP address and to connect using Remote Desktop.
On the server I'm listening on the subject:
tibrvlisten -service 7541 -network ;239.193.1.110 MY.SUBJECT
From my local machine I'm trying to send a message to that same subject, but it never reaches the server. If I run the same send command on the server itself, it works fine and the message arrives. I just can't send it from my local machine to the server:
tibrvsend -service 7541 -network 184.10.34.9;239.193.1.110 -daemon tcp:7541 MY.SUBJECT "Hello Test Message"
The error I'm getting on the console is: tibrvsend: Failed to initialize transport: Could not resolve network specification
Try to keep the network parameter as simple as possible, and only use the daemon parameter if you are running multiple daemons or using a remote daemon.
tibrvsend -service 7541 -network ";239.193.1.110" MY.SUBJECT "Hello Test Message"
On Windows you often have multiple network adapters which is why you sometimes need to be more specific with the first parameter. The interface address is usually the easiest form to use as you have discovered.
-network "<local ip address>;239.193.1.110" ...
On Unix platforms you can use the interface name or a network name from /etc/networks, but you do not have this luxury on Windows, and IPv6 often renders it unusable.
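For example, on Linux you could name the interface directly (eth0 here is a placeholder for whatever your interface is called):
-network "eth0;239.193.1.110" ...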
Read the RV Concepts Guide for more discussion on the network parameter.
-network "<hostname>;239.193.1.110" ...
The most useful being network IP, e.g. 1.2.3. will match interface 1.2.3.4
-network "<network IP>;239.193.1.110" ...
It's working now:
tibrvsend -service 7541 -network "local-ip;239.193.1.110;server-ip" MY.SUBJECT "This is from local machine"
I am getting a timeout error when trying to deploy to a VM instance hosted on AWS. Manually, I can log in using:
ssh -i myKeyFile.pem myuser@IP
Once I access the remote machine, I can execute some docker commands and everything works fine. But now that I need to automate that in the CD pipeline, I am getting the following error:
2020-06-02T21:37:12.6877276Z Trying to establish an SSH connection to ***#IP:port
2020-06-02T21:38:52.4629461Z ##[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake.
2020-06-02T21:38:52.4685976Z ##[section]Finishing: Run shell commands on remote machine
The steps I follow to make the SSH connection are:
I created an SSH service connection in the project settings in Azure DevOps
I created the CD pipeline
I added an SSH task with the following parameters.
When I manually trigger it to test whether it works, the release starts fine, but after roughly 1 minute 43 seconds I get the error.
Then, when I review the logs, it is the same error I pasted at the beginning:
[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake
I've increased the handshake timeout setting from the default (20000) to 90000, but no luck.
Has anyone faced this problem before?
It seems there is an ongoing issue with the default agent pools in Azure DevOps. Many people have reported it, and the Azure DevOps team is working on it at the time this post is being written (I couldn't find the post where all of this is detailed; I will add it later on).
The workaround is to create a self-hosted agent. After it has been created, you will need to re-create your CD pipeline using the new self-hosted agent.
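For reference, registering a Linux agent goes roughly like the following sketch; the organization URL, PAT token, pool name, and package version are placeholders you fill in from your own Agent pools page:
mkdir myagent && cd myagent
tar zxvf vsts-agent-linux-x64-<version>.tar.gz
./config.sh --url https://dev.azure.com/<your-organization> --auth pat --token <your-PAT> --pool <your-pool>
./run.sh
Once run.sh is listening, the agent should show up in the chosen pool and you can point the pipeline's agent job at that pool.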
The rest of the SSH task configuration depends on your needs. But if you want to test that the SSH connection works, just print something:
echo "I'm connected"
After this, your CD pipeline should work fine.
More details on how to create the self-hosted agent on Windows; there are also links for Linux and Mac.
I had a similar issue with a VM in Azure. It turned out I had set the security group to only allow SSH in from my local network, and Azure DevOps agents obviously run in a Microsoft network, so they were coming from a different IP address range. The solution was to open up SSH to all source IP addresses. You can get the list of IP address ranges DevOps agents use, but they appear to change every week, which isn't very helpful.
See https://learn.microsoft.com/en-us/azure/devops/organizations/security/allow-list-ip-url?view=azure-devops#microsoft-hosted-agents
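If you prefer to script that change, a minimal sketch with the Azure CLI looks like this; the resource group, NSG name, and priority are placeholders for your own values, and omitting the source prefix leaves the rule open to any source:
az network nsg rule create --resource-group <my-rg> --nsg-name <my-nsg> --name AllowSshAnySource --priority 300 --protocol Tcp --destination-port-ranges 22 --access Allow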
I have a PowerShell script which copies certain files from a remote server and pastes them onto the local server. Now I need to use the file from another agent server, which doesn't have connectivity to the remote server.
So I need to find out the port on which communication will be established from the agent server to the remote server to copy the data.
I tried using the netstat command but didn't find anything in its output.
What are the steps to follow to get the port information?
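One way to see this, assuming you can run the copy once from a machine that does reach the remote server: start the copy, and while it is running, list the established connections to that server from a second window (the IP is a placeholder):
netstat -ano | findstr <remote-server-ip>
The remote port shown in the output is the one the copy uses; for a PowerShell Copy-Item over a UNC path that is normally SMB on TCP 445.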
I want to verify that my network monitoring program on Mac can handle network interfaces that come and go. For example, the user could attach a Wi-Fi adapter via Thunderbolt, and my program must notice that.
So, I set up a Python server running on localhost:8000. Running wget http://localhost:8000 on the command line gives me a valid response from the Python server, so direct communication with localhost succeeds. So far so good.
Next, I wrote a Python script setting up a software network interface, tunneling traffic from 10.0.2.1 to localhost. However, the tunnel is obviously not correctly set up, because the script hangs on the wget part:
import os

try:
    # Create a gif (generic tunnel) interface; requires root privileges
    os.system("ifconfig gif6 create")
    # Bring it up with 10.0.2.1 as the local address and 127.0.0.1 as the peer
    os.system("ifconfig gif6 inet 10.0.2.1 127.0.0.1 up")
    # Fetch through the tunnel address; this is the step that hangs
    os.system("wget http://10.0.2.1:8000")
finally:
    # Always remove the interface again, even if the wget fails
    os.system("ifconfig gif6 destroy")
What am I doing wrong when trying to set up the 10.0.2.1 <-> 127.0.0.1 tunnel? There is probably something wrong in the ifconfig commands but I'm unable to figure it out.
I set up a distributed load testing environment using JMeter on Ubuntu machines:
-> Master: the system running the JMeter GUI, controlling each slave.
-> Slave: a system running jmeter-server, which receives commands from the master and sends requests to the server under test.
-> Target: the web server under test, which gets requests from the slaves.
The basic requirements are met:
- The firewalls on the systems are turned off
- All the planned master and slaves are in the same subnet
- The JMeter server can access the target
- The same version of JMeter is on all the systems (version 2.3.4)
I did the following:
1) Tried pinging from master to slave and vice versa through the Ubuntu terminal; it works.
2) Added the following to client (master) jmeter.properties:
# Remote hosts and RMI configuration
remote_hosts=192.168.0.139:1099
# RMI port to be used by the server (must start rmiregistry with same port)
server_port=1099
3) Added the following to server (Slave) jmeter.properties:
# On the server(s)
set server_port=1234
start rmiregistry with port 1234
4) Now started the JMeter engine on the master:
a) Started JMeter on the master machine (GUI)
b) Created a test plan (added a thread group, samplers, and the required listeners)
c) Now started the slave(s) from the GUI:
- click Run at the top
- select Remote Start
- select the IP address
But an error popup appeared:
"Connection refused to host: 192.168.0.139; nested exception is: java.net.ConnectException: Connection refused"
What may be the reason for not connecting to the remote slave (here: 192.168.0.139)?
Do I need to do any more configuration in the jmeter.properties file or in any other files (on both slave and master)?
I think you forgot to start the slave in "slave mode".
On the command line, go to the jmeter/bin directory and execute jmeter-server (jmeter-server.bat on Windows).
That will start the slave process and keep it listening for commands.
Then you can go forward, loading and launching the script.
Have a look at:
http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.pdf
Also be aware that:
- the two systems MUST run the same JMeter version
- the two systems MUST be on the same subnetwork
- the two systems SHOULD be as similar as possible: same OS, same directory tree, etc
- "remote_hosts" only require the address. The port is specified by "server_port" parameter.
I am running a Ruby script that uses Ruby/MySQL and net/ftp. The script runs on a Windows Vista box and tries to create a database connection and an FTP connection to the same remote Solaris server.
Here is the gist of the code:
require 'mysql'
require 'net/ftp'
# db and ftp are hashes of connection settings defined earlier in the script
dbh = Mysql.real_connect(db["host"], db["user"], db["pass"], db["name"])
ftp = Net::FTP.new(ftp["host"])
Now, if I run the script from the Vista box that it resides on, everything works as it should. However, the script is also being called from yet another server via NRPE, and that's when the error occurs.
If I set db["host"]/ftp["host"] equal to the fully qualified domain name of the remote server, here is the error I receive:
getaddrinfo: no address associated with hostname.
After receiving that error I tried pinging the server from the script, and sure enough it failed when pinging the hostname; however, it worked when I pinged the IP address.
But then if I set db["host"]/ftp["host"] to the IP address of the remote server I get this error:
The requested service provider could not be loaded or initialized. - socket(2)
I'm having a hard time finding any info on how to debug this, so if anyone has any ideas they will be greatly appreciated.
Thanks in advance.
It turns out the script was being run remotely as a different user than when it was run locally. I'm not exactly sure what in the environment changed to cause the issue, but once we set up the remote instance to run as the same user as the local one, everything worked fine.
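An easy way to confirm a mismatch like this is to have the remote mechanism (NRPE here) run whoami and compare the output with what a local shell on the box reports:
whoami
If the two differ, per-user environment differences are a likely cause of errors like the ones above.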