I have an application running on a dev server and connecting to dev-db, a host running an Oracle instance.
Now I'm deploying it on a prod/prod-db machine.
Since the dev-db URL is hardcoded inside the Java code, the just-copied binaries still point to dev-db. As a quick workaround I added a line to the Windows hosts file on prod so that dev-db now resolves to the prod-db IP address. It works, but I'm not very satisfied with this global-scope solution.
I was wondering whether there is a way to make a hosts file "private" to a certain environment, i.e. one that is only valid in the scope of my running application.
No, there's no way to do this, and it's a bad approach anyway.
You should instead fix the real problem, which is the hard-coding of the address inside your Java code. Put such things in a properties file, and use a different properties file for production.
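A minimal sketch of that approach, assuming a db.properties file shipped next to the binaries (the file name, key, and URL below are illustrative, not from the original code):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class DbConfig {
    public static String jdbcUrl() throws IOException {
        Properties props = new Properties();
        // Deploy a different copy of this file on dev and on prod,
        // e.g. jdbc.url=jdbc:oracle:thin:@prod-db:1521:ORCL
        try (FileInputStream in = new FileInputStream("db.properties")) {
            props.load(in);
        }
        return props.getProperty("jdbc.url");
    }
}

The binaries then stay identical across environments; only the properties file differs.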
Related
We have a number of (developer) eXist-db database servers, and some staging/production servers.
Each has its own configuration, and they differ slightly.
We need to select which configuration to load and use in queries.
The configuration is to be stored in an XML file within the repository.
However, when syncing the content of the servers, a single burnt-in XML file is not sufficient, since it is overwritten during copying from the other server.
For this, we need the physical name of the actual database server.
The only function I found, request:get-server-name, is not quite stable, since a single eXist server can be accessed through a number of different URLs (localhost, intranet or external). That would lead to unnecessary duplication of the configuration, one copy for each external URL...
(Accessing local files in the file system is neither secure nor fast.)
How can I get the physical name of the eXist-db server from XQuery?
I'm sorry, but I don't fully understand your question. Are you talking about eXist's default conf.xml or your own configuration file that you need to store in a VCS repo? Should the XQuery be executed on one instance and trigger an event in all the others, or just some, or...? Without some code it is difficult to see why and when something gets overwritten.
You could try console:jmx-token, which does not vary depending on the URL (at least it shouldn't).
Also, you might find it much easier to use a Docker-based approach, either with multiple instances coordinated via docker-compose, or to keep the individual configs from interfering with each other when moving from dev to staging to production: https://github.com/duncdrum/exist-docker
If I understand correctly, you basically want to be able to get the hostname or the IP address of a server from XQuery. If the functions in the XQuery Request module are not doing as you wish, then another option would be to set a Java System Property when starting eXist-db. This system property could be the internal DNS name or IP of your server, for example: -Dour-server-name=server1.mydomain.com
From XQuery you could then read that Java System property using util:system-property("our-server-name").
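For example (the property name and configuration paths here are made up), the startup script could pass -Dour-server-name=server1.mydomain.com, and a query could then select the matching configuration:

let $server := util:system-property("our-server-name")
return
    if ($server = "server1.mydomain.com")
    then doc("/db/config/production.xml")
    else doc("/db/config/dev.xml")

Since the property is set per JVM, it stays stable no matter which URL the server is reached through.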
The scenario is as follows:
I have TeamCity set up to use AWS EC2 hosts running Windows Server 2012 R2 as build agents. In this configuration, the TeamCity agent service is running as SYSTEM. I am trying to implement FastBuild as our new compilation process. In order to use the distributed compilation functionality of FastBuild, the build agent host needs to have access to a shared network folder. Unfortunately, I cannot seem to give this kind of access from one machine to another.
To help further the explanation, I'll use named examples. The networked folder, C:\Shared-Folder, lives on a host named Central-Host. The build agent lives on Builder-Host. Everything is running Windows Server 2012 R2 on EC2 hosts that are fully network permissive to each other via AWS security groups. What I need is to share a directory from Central-Host so that Builder-Host can fully access it via a directory structure like this:
\\Central-Host\Shared-Folder
By RDPing into both hosts using the default Administrator account, I can very easily set up the network sharing and browse (while on Builder-Host) to the \\Central-Host\Shared-Folder location. I can also open up the command line and run:
type NUL > \\Central-Host\Shared-Folder\Empty.txt
with the result of an empty text file being created at that networked location.
The problem arises from the SYSTEM account. When I grab PsTools and use the command:
PSEXEC -i -s cmd.exe
I can test commands that will be given by TeamCity. Again, it is a service being run as SYSTEM which, I need to emphasize, cannot be changed to a normal User due to other issues we have when using TeamCity agents under the User account type.
After much searching I have discovered how to set up Active Directory services so that I can add Users and Computers from the domain but after doing so, I still face access denied errors. I am probably missing something important and I hope someone here can help. I believe this problem will be considered "solved" when I can successfully run the "type NUL" command shown above.
This is not an answer for the permissions issue, but rather a way to avoid it. (Wanted to add this as a comment, but StackOverflow won't let me - weird.)
The shared network drive is used only for the remote worker discovery. If you have a fixed list of workers, instead of using the worker discovery, you can specify them explicitly in your config file as follows:
Settings
{
    .Workers =
    {
        'hostname1' // specify hostname
        'hostname2'
        '192.168.0.10' // or ip
    }
    ... // the other stuff that goes here
}
This functionality is not documented, as to date all users have wanted the automatic worker discovery. It is fine to use, however, and if it turns out to be useful it can be elevated to a supported feature with just a documentation update.
I need to install multiple iDempiere instances on one server. The customized packages differ in build and in the database they use. Is there any way to deploy both on one server and access them like localhost:8080/client1 and localhost:8080/client2? Any help appreciated.
When I want to run several application servers, I copy the installation to various paths and change the database name and port of each application:
/opt/idempiere-server-production/ (on port 8080 for example) for production
And
/opt/idempiere-server-test/ (on port 8081 for example) for test
The way you describe is not possible, because the iDempiere web app is always reached as:
http://hostname:port/webui
Running multiple instances of iDempiere on a single server is not too difficult.
Here is what you need to take care of:
Install the instances into different directories. The instances do not need to share any common files. So you are just fine making a full installation for each instance.
Make sure each instance uses its own database. Use different names for the instance databases.
Make sure the iDempiere server instances use different TCP ports.
If you really need to use a single port to access all of the instances, you could use an HTTP server like Apache or nginx to define virtual hosts. Proxying or rewrite rules will then allow you to do the desired redirections. (I am using subdomains and Apache mod_proxy to do the job; see the sketch right below.)
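As an illustration, a minimal mod_proxy configuration along those lines might look like this (the subdomain names are made up, and it assumes mod_proxy and mod_proxy_http are enabled):

<VirtualHost *:80>
    ServerName prod.idempiere.example.com
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>

<VirtualHost *:80>
    ServerName test.idempiere.example.com
    ProxyPass        / http://localhost:8081/
    ProxyPassReverse / http://localhost:8081/
</VirtualHost>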
There is another benefit to using subdomains for browser access: if all your server instances use the same host name, the client browser will sometimes not be able to keep cookies from different instances apart, which can lead to a blocked session, as discussed in the iDempiere Google group.
Use different DB user names. The docs advise not to change the default user name Adempiere, and this is OK for a single-instance installation. Still, if you use a single DB user for all of your instances you will run into trouble once you need to restore a database from a backup file: RUN_DBRestore.sh will delete and recreate the DB user, which is not possible when the user owns more than one DB.
You can run all of your instances as services in parallel. Before the installation of another instance, rename the service script: sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-theInstance. Of course you will need to do some bookkeeping work with the service controller of your OS to ensure that the renamed services are started as desired.
The service controller talks to the iDempiere server via the OSGi console. For this to work without problems in a multi-instance environment you need to assign a different telnet port number to each of the instances: in the editor of your choice, open the file /etc/init.d/iDempiere, find the line export TELNET_PORT=12612, and change the port number to something else, as in the one-liner below.
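For example, assuming the renamed service script from the previous step (the new port number is arbitrary, it just has to be unique per instance):

sudo sed -i 's/^export TELNET_PORT=12612$/export TELNET_PORT=12613/' /etc/init.d/idempiere-theInstance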
Please Note:
OS-specific descriptions in this guide are for Ubuntu 16/18 or Debian; if you are on another OS you will need to do some research.
I have been using the described approach to host iDempiere versions 5 and 6 for some time now and have not had any problems so far. Still, make sure you do your own thorough tests if you want to go that route.
If you run into any problems (and maybe even manage to solve them), please report back to the community, by giving your own answer to this question or by posting to the iDempiere Google group. Thanks!
You can have as many setups on your server as you like. When you run the setup to create your properties, simply choose different web ports for each installation. You may also need to slightly change the web servers' configuration if they use some default ports.
Coming from a MAMP Pro background, I loved the ability to have a "base" folder (/Sites in this case), have all of my projects underneath it, and set custom server names/aliases with it. With Vagrant, it looks like I can accomplish the name/alias part with vagrant-hostsupdater, but if I really did just want to have the Vagrantfile in /Sites and have all projects use the same config, what's the best way to specify a subfolder disk location for each of those custom host names?
I'm most likely over-thinking this, have just been a sucker for GUI interfaces and would love to know how to accomplish this. Thanks as always!
Clarification
What I'm used to
I used to use MAMP Pro, which allows you to set up custom host additions with its GUI interface. So, within my ~/Sites directory, I have several different projects going on, all in subfolders. The screenshot below shows how I can set a server name and specify a disk location, all from this central location.
What I'd like to do with Vagrant
Now, I do know of (and have used) vagrant-hostsupdater, but what I was wondering is whether I can put my Vagrantfile in my ~/Sites directory (which is kind of like the root of the server, since all of my projects require the same setup) and then have individual host names set up for each project. So instead of having to access a subfolder like local.dev/project-1 or local.dev/project-2, I could set up server names such as local.project-1.com and local.project-2.com from within that top-level Vagrantfile and specify the subfolder each rewrite rule should attach to.
The reason I'd like to do this is so I only have to run one vagrant up and can then access all of my projects from one Vagrant instance, as well as only keep track of one Vagrantfile. Thanks!
You need to tell Vagrant which hostnames you would like to use.
Directory based hostnames
Assuming you would like to set your hostnames based on the directory names, you can collect all of the hostnames with Ruby and pass them to the hostsupdater configuration.
# every subdirectory of ~/Sites becomes a hostname alias
SITES_DIR = File.expand_path("~/Sites")
config.hostsupdater.aliases = Dir["#{SITES_DIR}/*/"].map { |d| File.basename(d) }
Configuration based hostnames
Alternatively, you can mock up whatever configuration format suits what you are trying to do and evaluate/process it in Ruby within the Vagrantfile.
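A rough sketch, assuming a hand-rolled sites.yml next to the Vagrantfile that maps each hostname to a project subfolder (the file name, hostnames, and guest paths are all made up):

# sites.yml (hypothetical):
#   local.project-1.com: project-1
#   local.project-2.com: project-2
require 'yaml'

sites = YAML.load_file(File.expand_path("sites.yml", __dir__))
config.hostsupdater.aliases = sites.keys
sites.each do |hostname, folder|
  # share each project subfolder into the guest
  config.vm.synced_folder "./#{folder}", "/var/www/#{folder}"
end

This keeps a single vagrant up while still letting each hostname map to its own subfolder.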
I have an RSpec test suite that I use to test our internal and public-facing API. Usually all I have to do to test the service is set up my parameters (e.g. test URLs), and from there the tests connect to the required service and do their thing.
My question is: how do I get Ruby to honor my hosts file entries? In this specific scenario I'm trying to hit our pre-live servers, which use the same URLs as our live environment but obviously are on an entirely different IP cluster.
Unless you are doing some very low-level stuff, Ruby will not perform DNS name resolution by itself, it will simply call the appropriate OS API. So, you need to figure out how to configure your operating system to use a local hosts file.
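Once the hosts file on the machine running the tests maps the pre-live URLs to the right IPs, the standard socket APIs should pick the entries up. A quick way to check from Ruby (the hostname is illustrative):

require 'socket'

# goes through the OS resolver (getaddrinfo), which consults the hosts file
puts Addrinfo.getaddrinfo("api.pre-live.example.com", 443, nil, :STREAM).first.ip_address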