Switch populated on ODL web topology cannot be removed - opendaylight

I'm running ODL (Beryllium) and testing an OpenFlow environment using mininet.
Everything is fine until I clean everything up from mininet: the switch on the web topology is still there.
I've tried using mn -c, but the switch remains in the topology view.
I know there is a command to force-remove and clean up the topology from the ODL terminal, but I can't find it.
Does anyone know the command, so that I can remove the switch and clean up my ODL?

Run "ovs-vsctl show" on your mininet system to see if any extra OVS instance(s)
are running. The web GUI you are looking at should just be a reflection of
ODL's operational store, and the operational store should just be a reflection
of any switches that are still connected to ODL's OpenFlow plugin.
I don't know of any ODL command that will clean up mininet, though.

Try mn -c as super user; it will solve this problem:
sudo mn -c
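Combining both answers, a typical cleanup sequence on the mininet host might look like the following sketch (the bridge name s1 is a placeholder; substitute whatever ovs-vsctl show reports):

```shell
# Cleanup sketch for a mininet + OVS test bed; requires root.
# "s1" below is a placeholder bridge name.
sudo mn -c                 # tear down mininet links, switches, and namespaces
sudo ovs-vsctl show        # check for bridges still registered with OVS
sudo ovs-vsctl del-br s1   # delete any leftover bridge by name
```

Once no switches remain connected to the controller, the stale node should disappear from ODL's operational store and hence from the topology view.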

Related

Erlang remote shell debugging not working

I'm trying to debug an Erlang node started on a remote PC, from my local PC.
For debugging I'm using the latest IntelliJ IDEA with the Erlang plugin.
The remote node is started like this:
erl -pa /path/to/myapp/ebin -name myapp@myremote.host -setcookie mycookie -shell -eval "application:start(myapp)."
IDEA uses the Rebar "Erlang Remote Node" configuration, so a local node is started and connected to myapp@myremote.host.
I can confirm the connection, because "nodes()." on the remote shell shows my connected node from the local machine. net_adm ping/pong also works.
"epmd -names" also shows the corresponding sessions.
Unfortunately, none of my breakpoints in the IDE trigger, so I can't stop execution and perform step-by-step debugging via the IDE.
Meanwhile, such a debug session works like a charm when both nodes are started on the local PC.
Please suggest what I'm doing wrong. Many thanks in advance.
PS: I also tried with short node names, with the same result.
You should set up epmd to listen on the external IP (http://erlang.org/doc/man/epmd.html), and after that the DNS name "myremote.host" should resolve to that IP.
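A sketch of what that might look like on the remote host (the listen address and the distribution port range are assumptions; adjust them to your network and open the same ports, plus epmd's 4369, in the firewall):

```shell
# Start epmd explicitly, listening on all interfaces (restrict -address as needed).
epmd -address 0.0.0.0 -daemon

# Start the node with a fixed distribution port range so the firewall can be
# opened for it (9100-9105 is an arbitrary example range).
erl -pa /path/to/myapp/ebin \
    -name myapp@myremote.host \
    -setcookie mycookie \
    -kernel inet_dist_listen_min 9100 inet_dist_listen_max 9105 \
    -eval "application:start(myapp)."
```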

VM not restarting automatically when installed with kickstart with --noautoconsole option

I created a CentOS 7.3 VM with kickstart, using the following command:
virt-install --name=vm1 --disk path=vm1.img,size=20 --vcpus=2 --ram=10240 --os-type=linux --os-variant=rhel7.0 --network bridge=br0 --graphics none --location=http://<IP>/centos7.3 -x "ks=http://<IP>/centos73vm-ks.cfg append ip=<VM IP> netmask=255.255.252.0 gateway=<gw> bootproto=static console=ttyS0"
This works fine. The VM is created, rebooted automatically, and the node is usable. However, the problem is that I cannot use this for automation, since I don't get control back. To fix that, I added the --noautoconsole option of the virt-install command at the end of the above command.
After doing so, the VM is installed, but after the reboot it does not come up automatically; it remains in the shut-off state and I need to start it manually. There are no errors on logging in to the console. Can someone give me any leads on how to fix this?
Any help would be greatly appreciated. Thanks in advance.
You need to add --wait=-1 so that virt-install waits for the installation to complete before exiting. The VM will then start automatically when the installation completes.
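With both flags added, the command from the question would become something like this sketch (placeholders such as <IP> kept as in the original):

```shell
virt-install --name=vm1 \
  --disk path=vm1.img,size=20 \
  --vcpus=2 --ram=10240 \
  --os-type=linux --os-variant=rhel7.0 \
  --network bridge=br0 --graphics none \
  --location=http://<IP>/centos7.3 \
  -x "ks=http://<IP>/centos73vm-ks.cfg append ip=<VM IP> netmask=255.255.252.0 gateway=<gw> bootproto=static console=ttyS0" \
  --noautoconsole --wait=-1   # wait for the install to finish, without attaching a console
```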
This sure sounds like an issue that was covered on the Red Hat customer portal. I'm not sure if that requires a paid license, but your company (or you) might have one already?
-- Jonas

How to start H2O Flow after rebooting my laptop?

I followed the instructions to install Flow:
https://www.youtube.com/watch?v=_HVx9Jqr34Q
It works perfectly. But if I restart my laptop and then open "http://localhost:54321", it shows not connected.
Should I rerun the command "java -jar h2o.jar"? Is that always required if I want to open Flow after a computer reboot? Is there an easy shortcut to start Flow?
Yes, you need to re-run java -jar h2o.jar after rebooting. Alternatively, you could have your OS start it by running that command during the boot process; the instructions for that vary by OS (and are outside the scope of Stack Overflow, but are easy to google).
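For example, on a systemd-based Linux laptop, one way to have the OS start it at boot is a small service unit (a sketch; the jar path /opt/h2o/h2o.jar and the unit file location are assumptions):

```ini
# /etc/systemd/system/h2o.service (hypothetical install path for h2o.jar)
[Unit]
Description=H2O Flow
After=network.target

[Service]
ExecStart=/usr/bin/java -jar /opt/h2o/h2o.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After sudo systemctl enable --now h2o.service, Flow should be reachable at http://localhost:54321 after every boot.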

SmartFoxServer 2X Linux 64 running on EC2 via dotCloud - how to install?

I am currently trying to deploy SmartFoxServer 2X on EC2 using dotCloud. I have been able to detect the private IP of the Amazon EC2 instance, and using the dotCloud tools I have been able to determine the correct port. However, I am having difficulty installing the server proper via the command line, so that I can log into it using the AdminTool.
My postinstall is fairly straightforward:
./SFS2X/sfs2x-service start-launchd
I find that on 'dotcloud push' there is a fair amount of promising output in my Cygwin terminal, but the push hangs after saying that the sfs2x-service has been launched correctly, until it times out.
Consequently, my question is: has anyone found a way to install SFS2X on EC2 via dotCloud successfully? I managed to have partial success with SFS Pro, with a complete push to dotCloud, by calling ./jre/bin/java -jar installer.jar in my postinstall. Do I need to do extra legwork and build an installer jar for SFS2X? What would be the best way to do this?
I do understand that there is a standard approach to deploying SFS2X on EC2 using RightScale, but I am interested in deployment on the dotCloud platform.
Thanks in advance.
The reason it is hanging is that you are trying to start your process in the postinstall script, and that is not the correct place to do so. The postinstall script is supposed to finish; if it doesn't, the deployment will time out and be cancelled.
Once the postinstall script has finished, the rest of your deployment will proceed.
See this page for more information about dotCloud postinstall script:
http://docs.dotcloud.com/0.9/guides/hooks/#post-install
Pay attention to this warning at the end.
Warning:
If your post-install script returns an error (non-zero exit code), or if it runs for more than 10 minutes, the platform will consider that your build has failed, and the new version of your code will not be deployed.
Instead of putting this in the postinstall script, you should add it as a background process, so that it starts up once the deployment process is complete.
See this page for more information on adding background processes to dotCloud services:
http://docs.dotcloud.com/0.9/guides/daemons/
TL;DR: You need to create a supervisord.conf file at the root of your project and add your service to it.
Example (you will need to change to fit your situation):
[program:smartfoxserver]
command = /home/dotcloud/current/SFS2X/sfs2x-service start-launchd
Also, make sure you have the correct dotCloud service specified in your dotcloud.yml, so that the correct binary and libraries are installed for your SmartFoxServer application.
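For reference, a minimal dotcloud.yml could look like the following sketch (the service name "www" and the type are assumptions; pick the service type that ships a JVM for SmartFoxServer):

```yaml
# dotcloud.yml at the project root (hypothetical service name and type)
www:
  type: java
```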

Installing Membase from source

I am trying to build and install membase from source tarball. The steps I followed are:
Un-archive the tar membase-server_src-1.7.1.1.tar.gz
Issue make (from within the untarred folder)
Once done, I enter the install/bin directory and invoke the membase-server script.
This starts up the server with a message:
The maximum number of open files for the membase user is set too low.
It must be at least 10240. Normally this can be increased by adding
the following lines to /etc/security/limits.conf:
I tried updating limits.conf as suggested, but no luck: it continues to pop up the same message and carries on booting.
Given that the server started, I tried accessing memcached over port 11211, but I get a connection refused message. I then figured out (via netstat) that memcached is listening on 11210 and tried telneting to port 11210; unfortunately, the connection is closed as soon as I issue the following commands:
stats
set myvar 0 0 5
Note: I am not getting any output from the commands above. (Yes, stats did not show anything, but I still issued set.)
Could somebody help me build and install Membase from source? Also, why is memcached listening on 11210 instead of 11211?
It would be great if somebody could also give me a step-by-step guide that I can follow to build from source from the Git repository (I have not used autoconf before).
P.S.: I have tried installing from binaries (Debian package) on the same machines, and I am able to install and telnet successfully. Hence I am not sure why the build from source is not working.
You can increase the number of file descriptors on your machine by using the ulimit command. Try the following (you might need to use sudo as well):
ulimit -n 10240
I personally have this set in my .bashrc so that it is always set whenever I start a terminal.
Also, memcached listens on port 11210 by default for Membase. This is because moxi, the memcached proxy server, listens on port 11211. I'm also pretty sure that the memcached build used for Membase only speaks the binary protocol, so you won't be able to telnet to 11210 and have commands work correctly. Telneting to 11211 (moxi) should work, though.
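To see the difference in practice, you can talk to moxi from the shell using the ASCII protocol (a sketch, assuming a local Membase with moxi running; nc is used instead of telnet so the session can be scripted):

```shell
# Ports in a default Membase setup, per the answer above.
MEMCACHED_PORT=11210   # memcached: binary protocol only
MOXI_PORT=11211        # moxi proxy: speaks the ASCII protocol

# An ASCII-protocol session against moxi: store, then fetch, a 5-byte value.
printf 'set myvar 0 0 5\r\nhello\r\nget myvar\r\nquit\r\n' | nc localhost "$MOXI_PORT"
```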