I have 1 ECS cluster running 1 EC2 instance. I started an APM server (SigNoz) on the EC2 instance (I SSH into EC2 and installed it from the command line). I have a program running as an ECS task. I try to connect this program to the APM server using localhost, but it fails with a connection timeout. (The ECS cluster and task all run on the EC2 launch type.)
How do I connect from ECS tasks to a service running on EC2? I thought they were on the same instance?
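For what it's worth, the usual cause is that in bridge or awsvpc network mode, localhost inside the container refers to the task's own network namespace, not the EC2 host, so the task has to target the host's private IP instead. A minimal sketch of resolving that IP, assuming bridge networking (where the instance metadata endpoint is reachable from containers) and SigNoz listening on its default OTLP gRPC port 4317:

import urllib.request

# IMDSv1 call; if IMDSv2 is enforced, a session token would be needed first
host_ip = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/local-ipv4", timeout=2
).read().decode()

# hypothetical SigNoz OTLP endpoint on the EC2 host (assumed default port)
apm_endpoint = f"http://{host_ip}:4317"
print("Point the tracer at", apm_endpoint)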
Related
Infinispan servers are not forming a cluster when started on different host machines.
We have created two EC2 instances. Both are using the same subnet and same security group.
We have executed the following commands from Jenkins on both EC2 instances:
cd infinispan-server-10.1.8.Final
JENKINS_NODE_COOKIE=dontKillMe nohup bash bin/server.sh >> server.log 2>&1 &
When verifying cluster membership on each server, each one shows only itself, not the other.
We want them to form a cluster.
I've followed the AWS DocumentDB docs for connecting from outside the VPC:
I created an EC2 instance in the same security group and VPC as the DocDB cluster
In the security group I opened port 22 for my IP, and also opened port 27017 for communication inside the security group so the EC2 instance can tunnel to the DocDB cluster
I ran ssh -f -i "ssh-tunneling-access.pem" -L 27017:{doc-db-cluster}:27017 {ec2-instance-user}@{ec2-instance-dns} -N to open the SSH tunnel
In another terminal I tried to connect using the Mongo shell with mongosh "mongodb://{credentials}@localhost:27017/?tls=true&tlsAllowInvalidHostnames=true&tlsCAFile=rds-combined-ca-bundle.pem"
I got an error "MongoServerSelectionError: read ECONNRESET"
I'm running on Windows 11, and my terminal is Powershell Core.
Any ideas what I missed and/or how to troubleshoot it?
First of all, make sure you can connect to DocumentDB from the EC2 instance. The security group attached to the DocumentDB cluster has to allow port 27017 with the EC2 instance (or the EC2 instance's security group) as the source.
Second, it's not clear where you're initiating the tunnel from. Did you execute step 3 on the Windows 11 machine? Have you installed OpenSSH on Windows?
How about using a GUI client, like Robo 3T, which has SSH tunneling support? Instructions on how to connect can be found here.
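If you'd rather script the check, here is a minimal pymongo sketch against the tunnel (an assumption, not part of the original answer; it presumes the tunnel from step 3 is up and forwarding localhost:27017):

from pymongo import MongoClient

client = MongoClient(
    "mongodb://user:password@localhost:27017/",
    tls=True,
    tlsCAFile="rds-combined-ca-bundle.pem",
    tlsAllowInvalidHostnames=True,  # the cert hostname won't match "localhost"
    directConnection=True,          # skip replica-set discovery over the tunnel
)
print(client.admin.command("ping"))

The directConnection flag matters with tunnels: without it the driver tries to reach the replica-set members by their advertised cluster hostnames, which are not resolvable from outside the VPC.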
How can I start oauth2_proxy on my EC2 instance? I finished the configuration but I can't get it to run.
I am trying to execute a bash script on an EC2 instance using boto. Boto provides a way to SSH to an EC2 instance on its public IP, but in my case the instances have only private IPs. SSH to these instances goes through a host that can reach all of them on their private IPs (a bastion host).
Following is the script to connect to an instance on a public IP:
import boto3
import paramiko

# fetch the instance's private key from S3
s3_client = boto3.client('s3')
s3_client.download_file('mybucket', 'key/mykey.pem', '/tmp/mykey.pem')

k = paramiko.RSAKey.from_private_key_file("/tmp/mykey.pem")
c = paramiko.SSHClient()
c.set_missing_host_key_policy(paramiko.AutoAddPolicy())

host = event  # hostname/IP passed in (e.g. from a Lambda event)
print("Connecting to " + host)
c.connect(hostname=host, username="ec2-user", pkey=k)
How do I connect to instances that have only a private IP (no public IP), if we want to connect through a bastion host with public IP P.P.P.P?
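For reference, a minimal sketch of the bastion hop with paramiko, assuming the same key works for both hosts (the private address 10.0.0.10 is a placeholder, not from the question):

import paramiko

key = paramiko.RSAKey.from_private_key_file("/tmp/mykey.pem")

# 1. connect to the bastion on its public IP
bastion = paramiko.SSHClient()
bastion.set_missing_host_key_policy(paramiko.AutoAddPolicy())
bastion.connect("P.P.P.P", username="ec2-user", pkey=key)

# 2. open a direct-tcpip channel from the bastion to the private instance
channel = bastion.get_transport().open_channel(
    "direct-tcpip", dest_addr=("10.0.0.10", 22), src_addr=("127.0.0.1", 0)
)

# 3. connect to the private instance through that channel
target = paramiko.SSHClient()
target.set_missing_host_key_policy(paramiko.AutoAddPolicy())
target.connect("10.0.0.10", username="ec2-user", pkey=key, sock=channel)

stdin, stdout, stderr = target.exec_command("uname -a")
print(stdout.read().decode())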
If your requirement is to trigger execution of some code on an Amazon EC2 instance, then it would be better to use the Amazon EC2 Run Command rather than try to automate an SSH connection.
Amazon EC2 Run Command provides a simple way of automating common administrative tasks like executing Shell scripts and commands on Linux, running PowerShell commands on Windows, installing software or patches, and more. Amazon EC2 Run Command allows you to execute these commands across multiple instances and provides visibility into the results, making it easy to manage configuration change across fleets of instances.
Your instances would need the Amazon EC2 Systems Manager (SSM) agent installed. See: Installing SSM Agent
You would then run commands on Amazon EC2 instances from the management console, AWS Command-Line Interface (CLI) or via an API call.
The send command does not accept tags as input. However, you could first perform a describe-instances call to search for instances by tag, then pass the resulting instance IDs to the send command. See: AWS CLI send-command
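A minimal boto3 sketch of that flow (the tag key and value are placeholders; the instances must be running the SSM agent with an instance profile that permits SSM):

import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

# look up instance IDs by tag first, since the send command is given IDs
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Environment", "Values": ["dev"]}]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

# run a shell script on those instances via SSM
resp = ssm.send_command(
    InstanceIds=instance_ids,
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["echo hello from SSM"]},
)
print(resp["Command"]["CommandId"])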
I have a Linux instance on Amazon EC2. I manually installed Spark on this instance and it's working fine. Next I wanted to set up a Spark cluster on Amazon.
I ran the following command in the ec2 folder:
spark-ec2 -k mykey -i mykey.pem -s 1 -t t2.micro launch mycluster
which successfully launched a master and a worker node. I can SSH into the master node using ssh -i mykey.pem ec2-user@master
I've also exported the keys: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
I have a jar file (containing a simple Spark program) that I tried to submit to the master:
spark-submit --master spark://<master-ip>:7077 --deploy-mode cluster --class com.mycompany.SimpleApp ./spark.jar
But I get the following error:
Error connecting to master (akka.tcp://sparkMaster@<master>:7077).
Cause was: akka.remote.InvalidAssociation: Invalid address: akka.tcp://sparkMaster@<master>:7077
No master is available, exiting.
I also updated the EC2 security settings for the master to accept all inbound traffic:
Type: All traffic, Protocol: All, Port Range: All, Source: 0.0.0.0/0
A common beginner mistake is to assume that Spark communication follows a program-to-master and master-to-workers hierarchy, whereas currently it does not.
When you run spark-submit, your program attaches to a driver running locally, which communicates with the master to get an allocation of workers. The driver then communicates with the workers. You can see this kind of communication between the driver (not the master) and the workers in a number of diagrams in this slide presentation on Spark at Stanford.
It is important that the computer running spark-submit be able to communicate with all of the workers, and not simply the master. While you can start an additional EC2 instance in a security group allowing access to the master and workers, or alter the security group to include your home PC, perhaps the easiest way is to simply log on to the master and run spark-submit, pyspark or spark-shell from the master node.
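To illustrate, a minimal PySpark job you could run from the master node itself, so the driver sits where it can reach both the master and the workers (the master URL is an assumption based on the question's setup):

from pyspark import SparkConf, SparkContext

# standalone master URL from the question; run this on the master node
conf = SparkConf().setAppName("SimpleApp").setMaster("spark://<master-ip>:7077")
sc = SparkContext(conf=conf)

# trivial action to confirm the driver can reach the workers
print(sc.parallelize(range(1000)).sum())
sc.stop()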