Starting Realm Object Server on AWS stalls - amazon-ec2

I've been trying to use Realm Object Server deployed on an Amazon EC2 instance, using the basic Amazon Ubuntu AMI (the Realm AMI only has ROS v1.8.3).
To get the latest ROS (v2.x) I followed Realm's instructions and ran curl -s https://raw.githubusercontent.com/realm/realm-object-server/master/install.sh | bash, which appears to execute successfully. I then followed that script's instructions to load nvm and use the latest Node version it installed.
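For reference, those post-install steps amount to roughly the following (a sketch only; the exact lines are whatever the install script prints, and nvm use node simply selects the newest installed Node version):
export NVM_DIR="$HOME/.nvm"
. "$NVM_DIR/nvm.sh"    # load nvm into the current shell
nvm use node           # switch to the latest installed Node version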
Then I run ros start. Here's what I get:
info: Loaded feature token capabilities=[Sync], expires=Wed Apr 19 2017 14:15:29 GMT+0000 (UTC)
info: Realm Object Server version 2.0.18 is starting
info: [sync] Realm sync server started ([realm-core-4.0.3], [realm-sync-2.1.4])
info: [sync] Directory holding persistent state: /home/ubuntu/data/sync/user_data
info: [sync] Operating mode: master_with_no_slave
info: [sync] Log level: info
info: [sync] Download log compaction is enabled
info: [sync] Max download size: 131072 bytes
info: [sync] Listening on 127.0.0.1:40134 (sync protocol version 22)
info: [http] 127.0.0.1 - GET /realms/files/%2F__wildcardpermissions HTTP/1.1 200 55 - 56.996 ms
info: [http] 127.0.0.1 - GET /realms/files/%2F__password HTTP/1.1 200 44 - 53.009 ms
info: [http] 127.0.0.1 - GET /realms/files/%2F__perm HTTP/1.1 200 40 - 9.402 ms
info: Autocreated admin user: realm-admin
info: Realm Object Server has started and is listening on http://0.0.0.0:9080
info: [http] 127.0.0.1 - GET /realms/files/%2F__admin HTTP/1.1 200 41 - 4.187 ms
info: [http] 127.0.0.1 - GET /realms/files/%2F__admin HTTP/1.1 200 41 - 29.902 ms
And then...nothing. It doesn't even get me back to my ubuntu@ip-XXX-XX-XX-XX: prompt. (It's possible that this is exactly what you'd expect, but I'm pretty new to this kind of process.)
When I try to access my server in the browser (my DNS:9080), the browser says Cannot GET / and the CLI says info: [http] 96.2xx.xxx.xxx - GET / HTTP/1.1 404 139 - 0.521 ms
The security groups for my EC2 instance are:
HTTP / TCP / 80 / 0.0.0.0/0
SSH / TCP / 22 / 0.0.0.0/0
Custom UDP Rule / UDP / 9080 / 0.0.0.0/0
Custom TCP Rule / TCP / 9080 / 0.0.0.0/0
I'm stuck. What am I doing wrong? Thanks for your help.

The web-based dashboard was part of ROS 1.x; it was replaced by Realm Studio in ROS 2.0. Nothing is served at the root URL any more, so Cannot GET / (and the 404 in your log) is expected. It is also normal that ros start keeps your terminal occupied: the server runs in the foreground.
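If you want the server to survive closing the SSH session, one generic option (an assumption on my part, not something the answer above prescribes) is to run it detached and confirm it is listening:
nohup ros start > ros.log 2>&1 &   # keep ROS running after logout; output goes to ros.log
curl -i http://localhost:9080/     # a 404 "Cannot GET /" here still confirms the server is up
To browse and manage the data, connect with the Realm Studio desktop app rather than a web browser.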

Related

Anthos on VMware: deploying Seesaw, health check fails with 403 Forbidden

We are installing Anthos on the VMware platform and we now get an error in the Admin Cluster deployment procedure for the Seesaw load balancer in HA.
The two Seesaw VMs were created successfully, but the health check then fails with the following 403 error:
ubuntu@anth-mgt-wksadmin:~$ gkectl create loadbalancer --config admin-cluster.yaml -v5
Reading config with version "v1"
- Validation Category: OS Images
- [SUCCESS] Admin cluster OS images exist
- Validation Category: Admin Cluster VCenter
- [SUCCESS] Credentials
- [SUCCESS] DRS enabled
- [SUCCESS] Hosts for AntiAffinityGroups
- [SUCCESS] vCenter Version
- [SUCCESS] ESXi Version
- [SUCCESS] Datacenter
- [SUCCESS] Datastore
- [SUCCESS] Resource pool
- [SUCCESS] Folder
- [SUCCESS] Network
- Validation Category: Bundled LB
- [FAILURE] Seesaw validation: admin cluster lb health check failed: LB "10.25.94.229" is not healthy: received 403 Forbidden
- Validation Category: Network Configuration
- [SUCCESS] CIDR, VIP and static IP (availability and overlapping)
- Validation Category: GCP
- [SUCCESS] GCP service
- [SUCCESS] GCP service account
Some validation results were FAILURE or UNKNOWN. Check report above.
Preflight check failed with preflight check failed
Exit with error:
This simple test also gives the same result:
root@jump-mgm-wks:~# wget http://10.25.94.229
--2021-07-27 13:56:04-- http://10.25.94.229/
Connecting to 10.173.119.123:8080... connected.
Proxy request sent, awaiting response... 403 Forbidden
2021-07-27 13:56:04 ERROR 403: Forbidden.
We also get this error in the logs:
ubuntu@anth-mgt-bigip1:/var/log/seesaw$ cat seesaw_ha.anth-mgt-bigip1.root.log.ERROR.20210727-123208.1738
Log file created at: 2021/07/27 12:32:08
Running on machine: anth-mgt-bigip1
Binary: Built with gc go1.15.11 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0727 12:32:08.331013 1738 main.go:86] config: Failed to retrieve Config: HAConfig: Dial failed: dial unix /var/run/seesaw/engine/engine.sock: connect: no such file or directory
Solved after recreating the admin workstation with the following procedure.
gkectl delete loadbalancer --config admin-cluster.yaml --seesaw-group-file seesaw-for-gke-admin.yaml
Then save the following files from the ubuntu home directory of the admin workstation to /backup on jump-mgm-wks:
admin-cluster.yaml
admin-cluster-ipblock.yaml
admin-seesaw-ipblock.yaml
gkeadm delete admin-workstation
gkeadm create admin-workstation --auto-create-service-accounts
gkectl create loadbalancer --config admin-cluster.yaml
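Recapping the sequence above as one runnable block (a sketch: the gkectl/gkeadm commands are taken verbatim from the answer, while the scp line is only an assumed way to copy the three files to the jump host):
gkectl delete loadbalancer --config admin-cluster.yaml --seesaw-group-file seesaw-for-gke-admin.yaml
# copy the config files off the admin workstation before deleting it
scp admin-cluster.yaml admin-cluster-ipblock.yaml admin-seesaw-ipblock.yaml ubuntu@jump-mgm-wks:/backup/
gkeadm delete admin-workstation
gkeadm create admin-workstation --auto-create-service-accounts
gkectl create loadbalancer --config admin-cluster.yaml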

Error starting distributed load test in non-GUI mode with JMeter

I get the error below on the master machine while running a distributed load test in non-GUI mode with JMeter. How can I resolve this?
Message on the master:
C:\apache-jmeter-5.4.1\bin>jmeter -Djava.rmi.server.hostname=xx.xx.xx.xx -n -t C:\apache-jmeter-5.4.1\bin\examples\masterslavetest.jmx -l C:\apache-jmeter-5.4.1\bin\examples\result.jtl -R xx.xx.xx.xx
Creating summariser <summary>
Created the tree successfully using C:\apache-jmeter-5.4.1\bin\examples\masterslavetest.jmx
Configuring remote engine: xx.xx.xx.xx
Using local port: 4000
Starting distributed test with remote engines: [xx.xx.xx.xx] @ Thu Mar 04 18:53:43 GMT 2021 (1614884023471)
Error in rconfigure() method java.rmi.MarshalException: error marshalling arguments; nested exception is:
java.io.NotSerializableException: org.apache.jmeter.JMeter$ListenToTest
Remote engines have been started:[]
The following remote engines have not started:[xx.xx.xx.xx]
Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445
Message on the slave:
Using local port: 4000
Created remote object: UnicastServerRef2 [liveRef: [endpoint:[xx.xx.xx.xx:4000,SSLRMIServerSocketFactory(host=lhr4-pegajm-03/xx.xx.xx.xx, keyStoreLocation=rmi_keystore.jks, type=JKS, trustStoreLocation=rmi_keystore.jks, type=JKS, alias=rmi),SSLRMIClientSocketFactory(keyStoreLocation=rmi_keystore.jks, type=JKS, trustStoreLocation=rmi_keystore.jks, type=JKS, alias=rmi)](local),objID:[79aa42b8:177fe8fb2b5:-7fff, 5964228045381296735]]]
Below are a few additional details.
JMeter: 5.4.1
Java: 15
Running the tests on Windows 10 VMs
Opened server.rmi.localport, client.rmi.localport, server.rmi.port
The slave doesn't show any logs

AWS server port 25 blocked a few hours after server creation

I am running an EC2 t3.large instance with CentOS.
My server region is Ohio.
I have installed WHM and it is working fine.
When I created the server, port 25 was working fine, but after a few hours port 25 gets blocked. Because of this I'm not able to send any email from my webmail.
Here is the snippet:
Just after server setup:
[root@ip-172-31-43-92 ~]# telnet aspmx.l.google.com 25
Trying 172.217.214.26...
Connected to aspmx.l.google.com.
Escape character is '^]'.
220 mx.google.com ESMTP
n18si6406443jao.103 - gsmtp
quit
221 2.0.0 closing connection
n18si6406443jao.103 - gsmtp
Connection closed by foreign host.
[root@ip-172-31-43-92 ~]#
After a few hours and a restart:
[root@ip-172-31-43-92 home]# telnet aspmx.l.google.com 25
Trying 172.217.212.26...
Amazon EC2 throttles traffic on port 25 of all EC2 instances by default, but you can request to have this throttle removed.
This page on the AWS website may help:
Remove The Port 25 Throttle From Your EC2 Instance
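As a quick way to re-check outbound reachability on port 25 once the limit has been lifted (just a convenience, assuming netcat is installed; it is not part of the AWS removal process):
nc -vz aspmx.l.google.com 25   # reports success or failure without an interactive telnet session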

How to navigate to an external URL provided by a yo generator's server

I'm using a yo generator (generator-moda), running on an EC2 instance, and want to navigate from my browser to the external URL provided, but my browser just hangs on connecting...
Are there special config adjustments that need to be made in EC2 security groups, or some other way to allow the IP or host below?
[BS] Access URLs:
-------------------------------------
Local: http://localhost:3000
External: http://172.31.60.85:3000
-------------------------------------
UI: http://localhost:3001
UI External: http://172.31.60.85:3001
-------------------------------------
[BS] Serving files from: ./app
[17:52:19] gulp-inject 12 files into main.scss.
[17:52:19] gulp-inject 12 files into main.scss.
[17:52:19] Starting 'html'...
[17:52:19] Finished 'html' after 3.89 ms
[BS] 1 file changed (index.html)
INFO [karma]: Karma v0.12.31 server started at http://localhost:9876/
INFO [launcher]: Starting browser PhantomJS
WARN [watcher]: Pattern "/home/ubuntu/dev/clients/alugha/main/app/scripts/**/*.html" does not match any file.
INFO [PhantomJS 1.9.8 (Linux)]: Connected on socket f08K4dCRmBorILmZgofR with id 91726259
The problem is that 172.31.0.0/16 is one of Amazon's private IP ranges, so you cannot access those addresses from outside the VPC (Amazon Virtual Private Cloud).
If you want to connect to your EC2 instance where your code is running, you need to do two things:
Connect to the public DNS hostname / IP that you can get from your EC2 console. The instructions are here: Determining Your Public, Private, and Elastic IP Addresses - AWS docs
Open the ports in the security group so you can connect to your instance. How to open a port in your security group is explained in this answer; instead of port 80, open 3000 and 3001 (see the sketch after these steps).
Then, in your browser, use the public DNS hostname you got in the first step with the correct port, and you should be able to load your page.
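A minimal sketch of the second step using the AWS CLI (sg-0123456789abcdef0 is a placeholder for your instance's security group ID; the same change can be made in the EC2 console):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3000 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3001 --cidr 0.0.0.0/0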

HTTPS is not working behind HAProxy

I have to put HAProxy in front of my already-running Apache web server. HAProxy and the Apache web server are on separate CentOS 6.4 machines.
I installed haproxy-1.5-dev19.el6.x86_64 and it works fine with HTTP, but I get the error below with HTTPS:
"502 Bad Gateway: The server returned an invalid or incomplete response".
HAProxy logs are shown below:
Nov 7 05:49:56 localhost haproxy[9925]: XX.XX.XXX.XX:51949
[07/Nov/2013:05:49:55.204] https-in~ abc-https/server1
1595/0/1/-1/1597 502 714 - - PHNN 2/2/0/0/0 0/0 "GET / HTTP/1.1"
Nov 7 05:49:57 localhost haproxy[9925]: XX.XX.XXX.XX:51947
[07/Nov/2013:05:49:55.972] https-in~ abc-https/server1
1523/0/1/-1/1525 502 714 - - PHNN 1/1/0/0/0 0/0 "GET /favicon.ico HTTP/1.1"
SSL logs on webserver (request behind proxy):
10.0.0.218 - - [06/Nov/2013:22:42:34 -0800] **"GET /"** 400 510
10.0.0.218 - - [06/Nov/2013:22:42:34 -0800] "GET /" 400 510
SSL logs on webserver (direct request):
XX.XX.XX.XX - - [06/Nov/2013:22:48:42 -0800] **"GET / HTTP/1.1"** 200 19553
As you can see, there is a difference at the web server between the proxied and the direct request.
Below is my haproxy.cfg file:
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 40000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor
    option redispatch
    retries 10
    timeout http-request 60s
    timeout queue 60s
    timeout connect 60s
    timeout client 60s
    timeout server 60s
    timeout http-keep-alive 60s
    timeout check 60s
    maxconn 30000

listen stats 0.0.0.0:8880
    stats enable
    stats hide-version
    stats uri /
    stats realm HAProxy\ Statistics
    stats auth XXXXX:XXXXX

frontend http-in
    bind *:80
    acl url_static path_beg -i /static /images /javascript /stylesheets
    acl url_static path_end -i .jpg .gif .png .css .js
    tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
    tcp-request connection reject if { src_conn_cur ge 200 }
    tcp-request connection track-sc1 src
    use_backend http-in-static if url_static
    default_backend http-in-bk

frontend https-in
    bind *:443 ssl crt /home/ec2-user/ev/haproxy.pem
    http-request add-header X-Proto https if { ssl_fc }
    use_backend abc-https if { ssl_fc }

backend abc-https
    server server1 10.0.0.16:443 check

backend http-in-static
    server static 10.0.0.16:80 check inter 100 weight 1

backend http-in-bk
    acl abuse src_http_err_rate(http-in) ge 100
    acl flag_abuser src_inc_gpc0(http-in)
    tcp-request content reject if abuse flag_abuser
    server server1 10.0.0.16:80 check inter 100 weight 1
There is only one web server, which is already running, and I have to put HAProxy in front of it.
Where am I going wrong? Kindly help me resolve this issue.
Regards,
Komal Pal
You are decrypting the SSL traffic and then sending the plaintext HTTP to an HTTPS socket on your web server.
In this setup you would normally send it to port 80 on the web server, because you have already decrypted the traffic.
If you want to re-encrypt instead, you must add the "ssl" flag to your "server xxx" line as well.
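Sketching both options against the backend from your config (pick one; "verify none" skips backend certificate validation and is only appropriate for testing):
# Option 1: terminate TLS at HAProxy and talk plain HTTP to Apache
backend abc-https
    server server1 10.0.0.16:80 check
# Option 2: re-encrypt towards Apache on 443
backend abc-https
    server server1 10.0.0.16:443 ssl verify none check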
