Topshelf multiple hosts

Is there any way in Topshelf to run multiple hosts in one executable?
// Create hosts
var h1 = HostFactory.New(...);
var h2 = HostFactory.New(...);
// Start both hosts in one application
Runner.Run(h1, h2);
Edit
Solved it with threads, but not sure if it is safe...
new Thread(() => Runner.Run(h1)).Start();
new Thread(() => Runner.Run(h2)).Start();

From Topshelf docs:
You can only have ONE service! As of Topshelf 3.x the base product no
longer supports hosting multiple services. This was done because the
code to implement it was very brittle and hard to debug. We have opted
for a simpler and cleaner base product. This feature will most likely
come back in the form of an add-on NuGet package.

Note: This is only valid for pre-3.0 versions of Topshelf. In 3.0 this was removed and is being replaced with other methods of hosting multiple services.
There is no way to run multiple hosts. Starting a host blocks execution and does a whole bunch of setup work. You can, however, register multiple logical services in a single host.
https://github.com/Topshelf/Topshelf/wiki/Creating-a-service
return (int)HostFactory.Run(x =>
{
    x.Service<Service1>(s => { ... });
    x.Service<Service2>(s => { ... });
});
All logical services run under a single AppDomain. This may or may not be an issue. If you need to host them in separate AppDomains, we started working on shelving: http://topshelf-project.com/documentation/shelving/
As a warning, if you're going to start multiple logical services with the same type, make sure they have unique names when configured.

Related

Application dependencies (other apps)

We need to deploy our 4 applications (3 Spring Boot apps and 1 ZooKeeper) with docker stack. As our DevOps guy told us, there is no way to define in docker stack which application depends on another, as you can in docker compose, so we as developers need to solve it in code.
Can you tell me how to do that, or what the best way is? One of our applications has to be started first because it manages the database (migrations and so on). The others can start once the database is prepared. Any ideas? Thanks.
If you want to run all 4 applications in one Docker container, you can refer to this post: Run multiple services in a container.
If you want to docker-compose the 4 applications, you can refer to this post on startup order; it uses depends_on between your app images.
No matter which way you choose, you must write a script that checks whether your first app has already finished managing the database; see wait-for-postgres.sh to learn how to use sleep in a shell loop to repeatedly check your first app's status.
A more precise way I can suggest, for example:
put a shared static flag, initially false
public static boolean is_app_start = false;
when you finish managing your database, change this value to true;
write a @RequestMapping("/is_app_start") in your controller to return this value
use curl in your shell script to check the value
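The polling step above can be sketched as a small shell helper. This is only a sketch: the host, port, and the /is_app_start path match the example controller and are assumptions about your deployment.

```shell
#!/bin/sh
# Succeed once the readiness endpoint reports "true".
# The URL (host/port/path) is hypothetical; it must match your controller.
is_ready() {
  [ "$(curl -s "$1")" = "true" ]
}

# Poll until the database-managing app says it is done, then return.
wait_for_app() {
  until is_ready "$1"; do
    echo "waiting for database setup to finish..."
    sleep 2
  done
}
```

In a compose setup, the dependent service's entrypoint would call something like `wait_for_app "http://db-manager:8080/is_app_start"` before exec'ing the real application.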

existdb: identify database server

We have a number of (developer) existDb database servers, and some staging/production servers.
Each have their own configuration, that are slightly different.
We need to select which configuration to load and use in queries.
The configuration is to be stored in an XML file within the repository.
However, when syncing the content of the servers, a single burnt-in XML file is not sufficient, since it is overwritten during copying from the other server.
For this, we need the physical name of the actual database server.
The only function we found, request:get-server-name, is not quite stable, since a single eXist server can be accessed through a number of different URLs (localhost, intranet, or external). That leads to unnecessary duplication of the configuration, one copy for each external URL...
(Accessing some local files in the file system is not secure and not fast.)
How to get the physical name of the existDb server from XQuery?
I'm sorry, but I don't fully understand your question. Are you talking about eXist's default conf.xml or your own configuration file that you need to store in a VCS repo? Should the XQuery be executed on one instance and trigger an event in all others, or just some, or...? Without some code it is difficult to see why and when something gets overwritten.
You could try console:jmx-token, which does not vary depending on the URL (at least it shouldn't).
Also, you might find it much easier to use a Docker-based approach, either with multiple instances coordinated via docker-compose, or to keep the individual configs from interfering with each other when moving from dev to staging to production: https://github.com/duncdrum/exist-docker
If I understand correctly, you basically want to be able to get the hostname or the IP address of a server from XQuery. If the functions in the XQuery Request module are not doing as you wish, then another option would be to set a Java System Property when starting eXist-db. This system property could be the internal DNS name or IP of your server, for example: -Dour-server-name=server1.mydomain.com
From XQuery you could then read that Java System property using util:system-property("our-server-name").
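A minimal sketch of that approach, assuming a standard eXist-db startup script and JAVA_OPTS as the mechanism for passing JVM flags (the property name and hostname are just the examples from above):

```shell
# Set the property when launching eXist-db (how you pass JVM options
# depends on your installation; JAVA_OPTS is one common mechanism):
JAVA_OPTS="-Dour-server-name=server1.mydomain.com" ./bin/startup.sh

# From XQuery, the value is then available as:
#   util:system-property("our-server-name")
# and can be used to select the matching configuration block.
```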

Configuring static routes with chef.io

Currently I want to use (Opscode) Chef to configure all the routes on our machines. Since I'm very lazy, I searched the internet for a ready-to-go cookbook but couldn't find anything. I know that Chef has a route resource (https://docs.chef.io/resource_route.html), but this is not enough for our use case. We have VMs in different placement zones (prod, preprod, dev) in MZ and DMZ, with different gateways in each.
If I can't find a cookbook that can differentiate these, I need to write one myself. My idea was to analyze the node name via Ruby and use a loop with the Chef route resource to create all routes.
if /_prod/ =~ Chef::Config[:node_name]
So my hope is that somebody is already using Chef to configure routes at enterprise scale and can help me out, or that the community can provide some ideas for developing the cookbook myself.
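The node-name check sketched in the question could be extracted into a small helper. This is only an illustration; the suffix convention (_prod, _preprod, _dev) is an assumption about the naming scheme.

```ruby
# Derive the placement zone from a Chef node name such as "web01_prod".
# The suffix convention is hypothetical; adapt the patterns to your naming.
def zone_for(node_name)
  case node_name
  when /_preprod/ then 'preprod'   # check before _prod to avoid overlap
  when /_prod/    then 'prod'
  when /_dev/     then 'dev'
  else 'unknown'
  end
end
```

In a recipe this would be called as `zone_for(Chef::Config[:node_name])` and the result used to pick the matching attribute set.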
Your environment description (particularly around Chef) is not very detailed, so I'll answer based on how I see it:
Chef environments to lock cookbooks in dev/QA/Prod (could be extended to dev/dev DMZ/QA/QA DMZ/Prod/Prod DMZ, etc.)
One wrapper (role) cookbook to set attributes like the gateway and static routes, per type of box or per group of routes you wish to set
A code cookbook containing the recipe that uses the attributes defined before.
Depending on the approach, you'll have one or many wrapper cookbooks in your node's run-list. Making a change to a route (in a wrapper) will go through locking the cookbooks in the corresponding environment.
For route management, a wrapper per "zone" may be the best idea if one of your zones matches exactly one environment.
WARNING: This is an example based on my current environment and how I would do it; I do not actually use the code below.
For our infrastructure, we have 3 QA environments (too many) within the same security zone (VLAN), so we need to change the routing along with the apps' lifecycle; that's where the locking mechanism comes in handy to change the routing of part of the nodes rather than all the nodes in the zone.
For the cookbook (point 3 above; let's name it 'my_routing_cookbook'), it's quite "simple".
In the attributes let's have:
default['sec']['default'] = { gw: '192.168.1.250', device: 'eth1' }
default['sec']['routes']['172.16.0.0/16'] = { gw: '192.168.1.254', device: 'eth0' }
default['sec']['routes']['10.0.0.0/8'] = { gw: '192.168.1.254', device: 'eth0' }
In the recipe:
route '0.0.0.0/0' do
  gateway node['sec']['default']['gw']
  device node['sec']['default']['device']
end

node['sec']['routes'].each do |r, properties|
  route r do
    gateway properties['gw']
    device properties['device']
  end
end
The default gateway could be in the route list; I just think it's easier for non-networking people to see that it's the default gateway this way.
For point 2, each wrapper cookbook will depend on this one and set its own attributes. Those cookbooks will have a default.rb just calling include_recipe 'my_routing_cookbook'.
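For illustration, a wrapper for one zone might look like this; the cookbook name, zone, and addresses are all assumptions:

```ruby
# metadata.rb of a hypothetical wrapper cookbook 'routing_prod_dmz'
name 'routing_prod_dmz'
depends 'my_routing_cookbook'

# attributes/default.rb -- this zone's gateway and routes
default['sec']['default'] = { gw: '10.1.0.250', device: 'eth1' }
default['sec']['routes']['192.168.50.0/24'] = { gw: '10.1.0.254', device: 'eth0' }

# recipes/default.rb -- just delegate to the code cookbook
include_recipe 'my_routing_cookbook'
```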
Hope this helps you get started.

Multiple iDempiere instances in one server

I need to install multiple iDempiere instances on one server. The customized packages differ in their builds and the databases they use. Is there any way to deploy both on one server and access them like localhost:8080/client1 and localhost:8080/client2? Any help appreciated.
When I want to run several application servers, I copy the installation to various paths and change the database name and port of each application:
/opt/idempiere-server-production/ (on port 8080, for example) for production
and
/opt/idempiere-server-test/ (on port 8081, for example) for test
The way you describe is not possible, because the iDempiere web app is always served as
http://hostname:port/webui
Running multiple instances of idempiere on a single server is not too difficult.
Here is what you need to take care of:
Install the instances into different directories. The instances do not need to share any common files. So you are just fine making a full installation for each instance.
Make sure each instance uses its own data base. Use different names for the instance data bases.
Make sure the idempiere server instances use different tcp ports.
If you really need to use a single port to access all of the instances, you could use an HTTP server like Apache or nginx to define virtual hosts. Proxying or rewrite rules will then let you do the desired redirections. (I am using subdomains and Apache mod_proxy to do the job.)
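A minimal sketch of that reverse-proxy setup with Apache mod_proxy; the subdomain names and backend ports are assumptions for illustration:

```apache
# Hypothetical virtual hosts, one subdomain per iDempiere instance.
# Requires mod_proxy and mod_proxy_http to be enabled.
<VirtualHost *:80>
    ServerName prod.example.com
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>

<VirtualHost *:80>
    ServerName test.example.com
    ProxyPass        / http://localhost:8081/
    ProxyPassReverse / http://localhost:8081/
</VirtualHost>
```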
There is another benefit to using subdomains for browser access: If all your server instances use the same host name the client browser will sometimes not be able to keep cookies from different instances apart, which can lead to a blocked session as discussed here in the idempiere google group.
Use different DB user names. The docs advise not to change the default user name Adempiere and this is ok for a single instance installation. Still if you use a single DB user for all of your instances you will run into trouble once you need to restore a database from a backup file. The RUN_DBRestore.sh will delete and recreate the DB user which is not possible when the user owns more than one DB.
You can run all of your instances as services in parallel. Before the installation of another instance, rename the service script: sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-theInstance. Of course you will need to do some bookkeeping work with the service controller of your OS to ensure that the renamed services are started as desired.
The service controller talks to the iDempiere server via the OSGI console. For this to work without problems in a multi instance environment you need to assign a different telnet port number to each of the instances: in the editor of your choice open the file /etc/init.d/iDempiere. Find the line export TELNET_PORT=12612 and change the port number to something else.
Please Note:
OS-specific descriptions in this guide are for Ubuntu 16/18 or Debian; on another OS you will need to do some research.
I have been using the described approach to host idempiere versions 5 and 6 for some time now and did not have any problems so far. Still make sure you do your own thorough tests if you want to go that route.
If you run into any problems (and maybe even manage to solve them) please report back to the community. (by giving your own answer to this question or by posting to the idempiere google group) Thanks!
You can have as many setups on your server as you like. When you run the setup to create your properties, simply choose different web ports for each installation. You may also need to slightly change the web server's configuration if it uses some default ports.

How do I get ruby to honor a local hosts file?

I have an RSpec test suite that I use to test our internal and public-facing API. Usually all I have to do to test a service is set up my parameters (e.g. test URLs), and from there the tests connect to the required service and do their thing.
My question is, how do I get Ruby to honor my hosts file entries? In this specific scenario I'm trying to hit our pre-live servers, which use the same URLs as our live environment but are obviously on an entirely different IP cluster.
Unless you are doing some very low-level stuff, Ruby will not perform DNS name resolution by itself, it will simply call the appropriate OS API. So, you need to figure out how to configure your operating system to use a local hosts file.
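If you do want per-suite control without touching the OS hosts file, Ruby's standard library Resolv can be pointed at an alternative hosts file explicitly. Note this only helps if your code resolves names via Resolv; libraries that use the system resolver (e.g. plain Net::HTTP) still honor only the OS hosts file. The file contents and hostname below are illustrative.

```ruby
require 'resolv'
require 'tempfile'

# Build a throwaway hosts file pointing a production hostname at a
# pre-live IP (both values are made up for this example).
hosts_file = Tempfile.new('hosts')
hosts_file.write("10.0.0.42 api.example.com\n")
hosts_file.flush

# Resolv::Hosts resolves names using only the given file.
resolver = Resolv::Hosts.new(hosts_file.path)
puts resolver.getaddress('api.example.com')  # => "10.0.0.42"
```

A `Resolv.new([Resolv::Hosts.new(path), Resolv::DNS.new])` composite falls back to real DNS for names not listed in the file.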
