When creating ZFS pools on Linux it is recommended to avoid device names like /dev/sdX or /dev/hdX, because those mappings are not persistent and can change between reboots. Instead we use /dev/disk/by-id or /dev/disk/by-path.
What I observed is that in AWS /dev/disk/by-id is not populated. So what is the best approach for AWS?
We can use /dev/disk/by-path for zpools in AWS, since it is populated by default. According to https://github.com/zfsonlinux/zfs/wiki/faq#selecting-dev-names-when-creating-a-pool, /dev/disk/by-path is one of the recommended approaches, along with /dev/disk/by-id.
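For illustration, a minimal sketch of creating a pool that way (the pool name and the by-path entries below are placeholders; run ls -l /dev/disk/by-path/ on your instance to see the real names first):
# list the persistent path-based names for the attached disks
ls -l /dev/disk/by-path/
# create the pool using those names instead of /dev/sdX
zpool create tank \
  /dev/disk/by-path/pci-0000:00:1f.0-nvme-1 \
  /dev/disk/by-path/pci-0000:00:1e.0-nvme-1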
Is there a way of persisting Quarkus devservices databases? Maybe by using volumes, but I cannot find any reference. I am thinking of something like a (non-existent) property quarkus.datasource.devservices.volume=some_volume that would reuse the existing volume some_volume with the spun-up Docker container.
Maybe what you can do for now is disable the database startup from dev services (see link 3 below), add a QuarkusTestResource to your test class, and start up your own Docker image with a volume mounted from your disk (see the sketch below).
The next time you start your test, the data should still be available as long as it points to the same volume mount. Also make sure that you don't use @TestTransaction, otherwise the transaction will be rolled back at the end of the test.
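A rough sketch of that idea, whether you start the container from a QuarkusTestResource or by hand (quarkus.datasource.devservices.enabled is a real property; the volume name, database name, and credentials below are placeholders):
# application.properties (test profile): turn off the dev-services database
quarkus.datasource.devservices.enabled=false
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/testdb
quarkus.datasource.username=test
quarkus.datasource.password=test

# start your own database container backed by a named volume, so the data survives restarts
docker volume create quarkus-test-data
docker run -d --name quarkus-test-db -p 5432:5432 \
  -e POSTGRES_DB=testdb -e POSTGRES_USER=test -e POSTGRES_PASSWORD=test \
  -v quarkus-test-data:/var/lib/postgresql/data postgres:15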
Maybe these links can help you:
cheat sheet: continuous testing
cheat sheet: dev-services
dev-service guide
I have an ActiveMQ Artemis cluster (2 nodes) with active/backup HA (shared-store mode), version 2.17.0.
The shared store is set up with NFS, mounted at $ARTEMIS_INSTANCE/data. In broker.xml I specified the following settings, which are pretty standard:
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
According to this official documentation page, there is a login.conf file in the etc directory which specifies the users and roles files. I have the following contents:
activemq {
org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
debug=false
reload=true
org.apache.activemq.jaas.properties.user="artemis-users.properties"
org.apache.activemq.jaas.properties.role="artemis-roles.properties";
};
Well, everything seems to work fine with it, but I noticed that every time I want to create a new user/role I have to create it twice, once on each node separately. If I had replication HA mode with 6 nodes, I would need to create the same user/role 6 times (once for each node).
Am I missing anything here?
Then I came up with the idea of moving artemis-users.properties and artemis-roles.properties into the $ARTEMIS_INSTANCE/data directory and modifying the login.conf file accordingly, so I only have to create a user/role once and the created entries are accessible from the other node(s):
activemq {
org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
debug=false
reload=true
org.apache.activemq.jaas.properties.user="../data/artemis-users.properties"
org.apache.activemq.jaas.properties.role="../data/artemis-roles.properties";
};
Since this is a shared store, it kind of makes sense to me to store them this way. I did quite a bit of testing and everything seems to work fine; I do not think there are going to be any race conditions this way.
Again, am I missing anything? Any suggestions or better workarounds?
The PropertiesLoginModule is provided by default because it is simple and straight-forward to configure for basic use-cases. However, it's not really designed for production use across a cluster. Typically you'd use an LDAP server (or some equivalent) which is a central store for all your user & role data. As the documentation states:
In general, using properties files and broker-centric user management for anything other than very basic use-cases is not recommended. The broker is designed to deal with messages. It's not in the business of managing users, although that functionality is provided at a limited level for convenience. LDAP is recommended for enterprise level production use-cases.
You are, of course, free to use the PropertiesLoginModule in more complex use-cases (e.g. like yours). I think putting the properties files on shared storage is fine and not likely to lead to problems.
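If you do later move to LDAP, the change stays confined to login.conf. A rough sketch of what that could look like (the connection URL, credentials, and base DNs below are placeholders for your own directory):
activemq {
   org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule required
      debug=false
      connectionURL="ldap://ldap.example.com:389"
      connectionUsername="cn=admin,dc=example,dc=com"
      connectionPassword=secret
      authentication=simple
      userBase="ou=users,dc=example,dc=com"
      userSearchMatching="(uid={0})"
      roleBase="ou=roles,dc=example,dc=com"
      roleName=cn
      roleSearchMatching="(member=uid={1},ou=users,dc=example,dc=com)";
};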
I need to install multiple iDempiere instances on one server. The customized packages differ in their builds and in the databases they use. Is there any way to deploy both on one server and access them like localhost:8080/client1 and localhost:8080/client2? Any help is appreciated.
When I want to run several application servers, I copy the installation to different paths
and change the database name and port of each application:
/opt/idempiere-server-production/ (on port 8080, for example) for production
and
/opt/idempiere-server-test/ (on port 8081, for example) for test
The exact way you described is not possible, because the iDempiere web application is always served at
http://hostname:port/webui
Running multiple instances of idempiere on a single server is not too difficult.
Here is what you need to take care of:
Install the instances into different directories. The instances do not need to share any common files. So you are just fine making a full installation for each instance.
Make sure each instance uses its own database. Use different names for the instance databases.
Make sure the idempiere server instances use different TCP ports.
If you really need to use a single port to access all of the instances, you could use an HTTP server like Apache or nginx to define virtual hosts. Proxying or rewrite rules will then allow you to do the desired redirections. (I am using subdomains and Apache mod_proxy to do the job; see the sketch after this list.)
There is another benefit to using subdomains for browser access: If all your server instances use the same host name the client browser will sometimes not be able to keep cookies from different instances apart, which can lead to a blocked session as discussed here in the idempiere google group.
Use different DB user names. The docs advise not to change the default user name Adempiere, and this is OK for a single-instance installation. Still, if you use a single DB user for all of your instances you will run into trouble once you need to restore a database from a backup file: RUN_DBRestore.sh will delete and recreate the DB user, which is not possible when the user owns more than one DB.
You can run all of your instances as services in parallel. Before installing another instance, rename the service script: sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-theInstance. Of course you will need to do some bookkeeping work with the service controller of your OS to ensure that the renamed services are started as desired.
The service controller talks to the iDempiere server via the OSGi console. For this to work without problems in a multi-instance environment you need to assign a different telnet port number to each of the instances: in the editor of your choice open the file /etc/init.d/iDempiere, find the line export TELNET_PORT=12612, and change the port number to something else.
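A rough sketch of the subdomain/mod_proxy approach mentioned in the list above (the server names and ports are placeholders; mod_proxy and mod_proxy_http need to be enabled):
<VirtualHost *:80>
    ServerName production.example.com
    ProxyPreserveHost On
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>

<VirtualHost *:80>
    ServerName test.example.com
    ProxyPreserveHost On
    ProxyPass / http://localhost:8081/
    ProxyPassReverse / http://localhost:8081/
</VirtualHost>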
Please Note:
OS-specific descriptions in this guide are for Ubuntu 16/18 or Debian; if you are on another OS you will need to do some research.
I have been using the described approach to host idempiere versions 5 and 6 for some time now and did not have any problems so far. Still make sure you do your own thorough tests if you want to go that route.
If you run into any problems (and maybe even manage to solve them) please report back to the community. (by giving your own answer to this question or by posting to the idempiere google group) Thanks!
You can have as many setups on your server as you like. When you run the setup to create your properties, simply choose different web ports for each installation. You may also need to slightly change the web server's configuration if it uses some default ports.
I'd like to specify the snapshot ID which would be used to create the root device image for an EC2 instance created with CloudFormation. How do I do that?
I could only find a way to make a volume from a snapshot, but no way to use it as the instance's root device.
If you want to use an EBS snapshot as the basis of the root disk (EBS volume) for an instance, you need to first register the snapshot as an AMI (e.g., using ec2-register).
Make sure to specify the correct architecture and kernel (AKI) when you register the snapshot as an AMI.
Alternatively, instead of taking a snapshot and registering it as separate steps, you could use the ec2-create-image command/API/console function to perform the snapshot and registration in a single step. This also takes care of picking the right architecture, kernel, and other parameters.
Once you have an AMI, you can tell CloudFormation to use that AMI when running a new instance.
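For illustration, a rough sketch of those steps with the current CLI and a CloudFormation snippet (the snapshot ID, AMI ID, device name, and instance type below are placeholders):
# register the snapshot as an AMI (the modern equivalent of ec2-register)
aws ec2 register-image --name my-root-ami \
  --architecture x86_64 --virtualization-type hvm \
  --root-device-name /dev/xvda \
  --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0123456789abcdef0}"

# then reference the resulting AMI in the CloudFormation template
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0
      InstanceType: t3.micro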
I concur. This has nothing to do with CloudFormation, but I just did this following a crippling 'do-release-upgrade'. It's just a matter of creating an image from the snapshot, and in my case making sure to change the virtualization type to "hardware-assisted virtualization" (HVM). Then you can just launch the resulting image (AMI).
I would like to access external data from my aws ec2 instance.
In more detail: I would like to specify inside my user-data the name of a folder containing about 2 MB of binary data. When my AWS instance starts up, I would like it to download the files in that folder and copy them to a specific location on the local disk. I only need to access the data once, at startup.
I don't want to store the data in S3 because, as I understand it, this would require storing my AWS credentials on the instance itself, or passing them as user-data, which is also a security risk. Please correct me if I am wrong here.
I am looking for a solution that is both secure and highly reliable.
Which operating system do you run?
You can use an Elastic Block Store (EBS) volume. It's like a device you can mount at boot (without credentials), and you have permanent storage there.
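For example, a minimal user-data sketch of that idea (the device name, mount point, and destination path are placeholders; on newer instance types the attached volume may show up under an NVMe device name instead):
#!/bin/bash
# mount the attached EBS volume that holds the data
mkdir -p /mnt/data
mount /dev/xvdf /mnt/data
# copy the folder to its final location (one time, at startup)
cp -r /mnt/data/my-folder /opt/app/data
umount /mnt/data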
You can also sync data between instances using something like the Gluster filesystem. See this thread on it.