I get 404 Not Found | Installing a WAR project in Karaf 3 from the user guide - osgi

I can install a WAR, but I can't test it. Why?
Following the Karaf tutorial, I did:
karaf#root()> bundle:install -s "webbundle:http://tomcat.apache.org/tomcat-7.0-doc/appdev/sample/sample.war?Bundle-SymbolicName=tomcat-sample&Web-ContextPath=/sample"
Bundle ID: 150
karaf#root()> list |grep tom
150 | Active | 80 | 0 | tomcat-sample
karaf#root()> web:list
ID | State | Web-State | Level | Web-ContextPath | Name
123 | Active | Deployed | 80 | /sample | tomcat-sample (0)
When I go to http://localhost:8181/sample it's not working. Why?

The sample WAR doesn't contain a welcome-file section in its web.xml, so nothing happens when you call localhost:8181/sample. You have to go to localhost:8181/sample/hello, as that is the registered servlet for this web application.
If you issue the http:list command you'll get the following listing:
karaf#root()> http:list
ID | Servlet | Servlet-Name | State | Alias | Url
---------------------------------------------------------------------------------------------------------------------------
103 | JspServletWrapper | jsp | Deployed | | [*.jsp, *.jspx, *.jspf, *.xsp, *.JSP, *.JSPX, *.JSPF, *.XSP]
103 | ResourceServlet | default | Deployed | / | [/]
103 | | HelloServlet | Deployed | | [/hello]
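A WAR that should answer on its bare context path needs a welcome-file-list in its WEB-INF/web.xml. A hypothetical entry (index.html is an assumed file name; the sample WAR has no such section) would look like:

```xml
<!-- Hypothetical addition to the sample's WEB-INF/web.xml -->
<welcome-file-list>
    <welcome-file>index.html</welcome-file>
</welcome-file-list>
```

With such an entry, a request to /sample would serve index.html instead of returning 404.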

Using Spring @Value annotation results in field not initialized error in Eclipse

I am currently developing a Spring Boot application in the Eclipse IDE with a Connection class which needs to know which data source to connect to. I decided to let it read this property from Spring's application.properties, through the @Value annotation:
@Value("${project.datasource}")
private final DataSource DATA_SOURCE;
where DataSource is an enum representing the possible data sources. However, in this constructor, I get a "Blank final field DATA_SOURCE may not have been initialized" error:
private DBConnection() throws SQLException {
    ConnectionConfig config = new ConnectionConfig(DATA_SOURCE);
    connection = DriverManager.getConnection(config.getUrl(), config.getUSERNAME(), config.getPASSWORD());
}
Inserting a default value doesn't work, either:
@Value("${project.datasource:POSTGRE_LOCAL}")
still gives the same error.
I tried installing the Spring Tools 4 plugin for Eclipse to check whether this was just Eclipse not understanding the @Value annotation's implications, but it seems that isn't the case. How do I solve this problem? Am I misunderstanding the implications myself?
application.properties:
project.datasource = POSTGRE_LOCAL
Project tree:
| .classpath
| .gitignore
| .project
| HELP.md
| mvnw
| mvnw.cmd
| pom.xml
|
+---.mvn
| \---wrapper
| maven-wrapper.jar
| maven-wrapper.properties
|
+---.settings
| org.eclipse.core.resources.prefs
| org.eclipse.jdt.core.prefs
| org.eclipse.m2e.core.prefs
| org.springframework.ide.eclipse.prefs
|
+---src
| +---main
| | +---java
| | | \---org
| | | \---ingsw21
| | | \---backend
| | | +---connection
| | | | DBConnection.java
| | | |
| | | +---controllers
| | | | UserController.java
| | | |
| | | +---DAOs
| | | | DAOUtente.java
| | | |
| | | +---DAOSQL
| | | | DAOSQLUtente.java
| | | |
| | | +---entities
| | | | Utente.java
| | | |
| | | +---enums
| | | | DataSource.java
| | | |
| | | \---exceptions
| | | BadRequestWebException.java
| | | DataAccessException.java
| | |
| | \---resources
| | application.properties
| |
| \---test
| \---java
| \---org
| \---ingsw21
| \---backend
| \---BackEnd
| BackEndApplicationTests.java
|
\---target
+---classes
| | application.properties
| |
| \---org
| \---ingsw21
| \---backend
| +---connection
| | DBConnection$ConnectionConfig.class
| | DBConnection.class
| |
| +---controllers
| | UserController.class
| |
| +---DAOs
| | DAOUtente.class
| |
| +---DAOSQL
| | DAOSQLUtente.class
| |
| +---entities
| | Utente.class
| |
| +---enums
| | DataSource.class
| |
| \---exceptions
| BadRequestWebException.class
| DataAccessException.class
|
\---test-classes
\---org
You cannot add @Value to a final field. Spring injects the value by reflection after the object has been constructed, so the field must not be final.
@Value("${project.datasource}")
private DataSource DATA_SOURCE;
should work just fine.
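To see why the non-final field works: Spring resolves the ${project.datasource} placeholder to the string POSTGRE_LOCAL and then converts that string to the enum constant. A minimal stdlib-only sketch of just that conversion step (the DataSource constants are assumed, since the real enum isn't shown in the question):

```java
import java.util.Properties;

// Stand-in for the project's enum; its real constants aren't shown in the question.
enum DataSource { POSTGRE_LOCAL, POSTGRE_REMOTE }

public class Main {
    // Roughly what Spring's conversion does with the resolved property string
    static DataSource resolve(Properties props) {
        return DataSource.valueOf(props.getProperty("project.datasource"));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("project.datasource", "POSTGRE_LOCAL");
        System.out.println(resolve(props)); // prints POSTGRE_LOCAL
    }
}
```

Because this conversion happens after construction, a final field (which must be assigned in the constructor or initializer) can never receive the injected value.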
Reverse the "$" and "{". The expression syntax is "${...}".

How to share Hazelcast cache over multi-war Tomcats

We have multiple Tomcats, each with multiple .war files (= Spring Boot apps) deployed in it.
We now need some distributed caching between app1 on tomcat1 and app1 on tomcat2. It's essential that app2 on tomcat1 (and app2 on tomcat2) cannot see the Hazelcast cache of the other deployed apps.
The following image shows this situation:
Tomcat 1 Tomcat 2
+-----------------------------------+ +-----------------------------------+
| | | |
| app1.war app2.war | | app1.war app2.war |
| +----------+ +----------+ | | +----------+ +----------+ |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| +----+-----+ +----+-----+ | | +----+-----+ +-----+----+ |
| | | | | ^ ^ |
+-----------------------------------+ +-----------------------------------+
| | | |
| | | |
| | | |
| | | |
+--------------------------------------+ |
Shared cache via Hazelcast | |
| |
+---------------------------------------+
Shared cache via Hazelcast
Is this possible with Hazelcast? And if so, how?
Right now I only find solutions talking about shared web sessions via Hazelcast. But this doesn't seem to be a solution for me here, or am I wrong?
If your applications must be strictly isolated, then you probably need to use different cluster groups. Cluster groups make it possible for different clusters to coexist on the same network, while being completely unreachable to one another (assuming correct configuration).
If, however, you just need application data to be separate, then you can just make sure that app1 instances use caches with names that do not clash with app2 cache names. This is the simplest implementation.
If you are deploying a sort of multitenant environment where you have security boundaries between the two groups of applications, then going for the cluster group option is better as you can protect clusters with passwords, and applications will be using distinct ports to talk to one another in those groups.
Yes, this is possible.
You can configure the cache name.
Application app1 uses a cache named app1. Application app2 uses a cache named app2.
If you configure it correctly then they won't see each other's data.
If by "essential" you mean that you have a stronger requirement than preventing accidental misconfiguration, then you need to use role-based security.
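As a sketch of the cluster-group option (Hazelcast 3.x group syntax; the group name and password here are made up), each WAR would ship its own hazelcast.xml:

```xml
<!-- hazelcast.xml packaged inside app1.war; app2.war would use its own group -->
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <group>
        <name>app1-cluster</name>        <!-- hypothetical group name -->
        <password>app1-secret</password> <!-- hypothetical password -->
    </group>
</hazelcast>
```

With this, the two app1 instances on tomcat1 and tomcat2 join the same group and share their caches, while the app2 instances (in their own group) cannot join or see that cluster.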

How to run flyway:migrate on a cloned database

We have a production database that we maintain using Flyway. Recently we cloned our production database to create a UAT database. The UAT database has the same schema and data as production. Now, when we run "mvn flyway:migrate" on the UAT database to test a new Flyway script, we get:
+---------+-----------------------+---------------------+---------+
| Version | Description | Installed on | State |
+---------+-----------------------+---------------------+---------+
| 0.0.1 | script.1 | | <Baseln |
| 0.0.2 | script.2 | | <Baseln |
| 0.0.3 | script.3 | | <Baseln |
| 0.1.1 | script.4 | | <Baseln |
| 0.1.2 | script.5 | | <Baseln |
| 0.2.0 | script.6 | | <Baseln |
| 0.5.1 | script.7 | | <Baseln |
| 0.5.2 | script.8 | | <Baseln |
| 0.6.0 | script.9 | | <Baseln |
| 0.7.0 | script.10 | | <Baseln |
| 0.8.0 | script.11 | | <Baseln |
| 0.9.0 | script.12 | | <Baseln |
| 0.10.0 | script.13 | | <Baseln |
| 0.11.1 | script.14 | | <Baseln |
| 0.12.0 | script.15 | | <Baseln |
| 0.13.0 | script.16 | | <Baseln |
| 0.14.0 | script.17 | | <Baseln |
| 0.15.0 | script.18 | | <Baseln |
| 0.16.0 | script.19 | | <Baseln |
| 0.16.1 | script.20 | | <Baseln |
| 0.17.0 | script.21 | | <Baseln |
| 0.17.1 | script.22 | | <Baseln |
| 0.18.0 | script.23 | | <Baseln |
| 1 | << Flyway Baseline >> | 2016-11-07 08:11:33 | Baselin |
| 1.16.0 | script.19 | 2017-02-15 10:03:18 | Future |
| 1.16.1 | script.20 | 2017-02-15 10:03:18 | Future |
+---------+-----------------------+---------------------+---------+
script.23 is a new script; we expected its state to be Pending.
However, the state of all scripts became "<Baseln" (Below Baseline). I searched related topics for a day but could not find a scenario close to my case. Is there any Flyway (Maven) configuration I can use to run the migrate command on a cloned database? Please help. (My database is SQL Server 2014, Flyway version 4.0, Maven version 3.5, JDK version 1.7.)
Thanks a lot.
Chi-Fu
I think that all versions lower than the baseline are not executed - they are supposed to belong to the baseline.
If script.23 is a new migration it should have a version greater than the last version, typically 1.18.0 (According to standard flyway config, V1_18_0__script.23.sql rather than V0_18_0__script.23.sql).
After renaming the file, run a repair before trying to migrate again.
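Concretely, the sequence might look like this (the file names follow the answer above; adjust them to your own naming scheme and migration directory):

```shell
# Rename the migration so its version sorts above the baseline version (1)
mv V0_18_0__script.23.sql V1_18_0__script.23.sql

# repair realigns the schema_version metadata after the rename
mvn flyway:repair

# The script should now show as Pending and be applied
mvn flyway:migrate
```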

How to launch an LXD container on another node and exchange ssh keys with the container?

How to launch an LXD container on another node and exchange ssh keys with the container?
That is, how to give Ansible direct access to the LXD container using SSH?
I am aware of the authorized_key module, however this would only exchange keys between the host and Ansible, not between Ansible and the LXD container.
Please see the below diagram which describes the machine layout:
+----------------------------+ +----------------------------+
| | | |
| Baremetal Machine <------------------+ Ansible Machine |
| + | | |
| | | | |
| | | | |
| | | | |
| +--------------------+ | | |
| | | | | | |
| | v | | | |
| | LXD Container | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| +--------------------+ | | |
| | | |
+----------------------------+ +----------------------------+
Start containers from images that support some sort of provisioning system.
Most common is cloud-init – it's already inside many official cloud images.
When you create such a container, just add the required configuration settings via the user.user-data config option and they will be applied automatically when the container starts.
The lxd_container module supports a config parameter to set container configuration options.
You can find useful cloud config examples here.
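Putting that together, a hypothetical playbook task (the container name, image alias, and key path are all made up for illustration) could inject Ansible's public key through cloud-init:

```yaml
# Launch the container and authorize Ansible's SSH key via cloud-init
- name: Create LXD container with cloud-init user-data
  lxd_container:
    name: web01                    # hypothetical container name
    state: started
    source:
      type: image
      alias: ubuntu/focal/cloud    # must be a cloud-init-enabled image
    config:
      user.user-data: |
        #cloud-config
        ssh_authorized_keys:
          - "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
```

Once cloud-init has run inside the container, Ansible can reach it directly over SSH with that key.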

Chef solo can't find cookbook during kitchen test run

I'm trying to set up Test Kitchen for a bunch of cookbooks that we use to provision a Jenkins CI instance.
We use Berkshelf to manage dependencies. The file structure is as follows:
| .gitignore
| .kitchen.yml
| Berksfile
| Berksfile.lock
| bootstrap.sh
| chefignore
| metadata.rb
| provision.sh
| readme.md
| solo.json
| solo.rb
| tree.txt
| VERSION
|
+---.kitchen
| | default-ubuntu-1404.yml
| |
| +---kitchen-vagrant
| | \---kitchen-chef-jenkins-default-ubuntu-1404
| | | Vagrantfile
| | |
| | \---.vagrant
| | \---machines
| | \---default
| | \---virtualbox
| | action_set_name
| | id
| | index_uuid
| | private_key
| | synced_folders
| |
| \---logs
| default-centos-71.log
| default-ubuntu-1404.log
| kitchen.log
|
+---site-cookbooks
| +---ant
| | | .gitignore
| | | .kitchen.yml
| | | Berksfile
| | | CHANGELOG.md
| | | chefignore
| | | CONTRIBUTING.md
| | | LICENSE
| | | metadata.rb
| | | README.md
| | | TESTING.md
| | | Thorfile
| | |
| | +---attributes
| | | default.rb
| | |
| | +---providers
| | | library.rb
| | |
| | +---recipes
| | | default.rb
| | | install_package.rb
| | | install_source.rb
| | |
| | +---resources
| | | library.rb
| | |
| | \---templates
| | \---default
| | ant_home.sh.erb
| |
| +---haxe_cookbook
| | | CHANGELOG.md
| | | metadata.rb
| | | README.md
| | |
| | \---recipes
| | default.rb
| |
| \---mbp-jenkins
| | Berksfile
| | Berksfile.lock
| | CHANGELOG.md
| | chefignore
| | metadata.rb
| | README.md
| |
| +---recipes
| | default.rb
| |
| +---resources
| | | commons-net-3.3.jar
| | |
| | +---css
| | | style.css
| | |
| | +---images
| | | logo-mbp.png
| | | web.png
| | |
| | \---js
| | scripts.js
| |
| \---templates
| +---default
| | | config.xml
| | |
| | \---secrets
| | hudson.console.AnnotatedLargeText.consoleAnnotator
| | hudson.model.Job.serverCookie
| | hudson.util.Secret
| | jenkins.security.ApiTokenProperty.seed
| | jenkins.slaves.JnlpSlaveAgentProtocol.secret
| | master.key
| | org.acegisecurity.ui.rememberme.TokenBasedRememberMeServices.mac
| | org.jenkinsci.main.modules.instance_identity.InstanceIdentity.KEY
| |
| \---emails
| build-complete.html.groovy
| build-started.html.groovy
|
\---test
\---integration
\---default
Executing:
kitchen converge default-ubuntu-1404
Results in the following error:
[2015-08-24T09:13:24+00:00] ERROR: Cookbook mbp-jenkins not found. If you're loading mbp-jenkins from another cookbook, make sure you configure the dependency in your metadata
[2015-08-24T09:13:24+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
Which suggests that chef-solo can't find the cookbook mbp-jenkins. I would expect it to find it, though, as we define the cookbook paths in the solo.rb file as follows:
root = File.absolute_path(File.dirname(__FILE__))
file_cache_path 'cache'
cookbook_path [root + "/cookbooks", root + "/site-cookbooks",root + "/berks-cookbooks"]
Not really sure what is going wrong here, so any suggestions would be appreciated.
Update:
I have tried using the chef_zero provisioner; that, however, gives me the output:
================================================================================
Error Resolving Cookbooks for Run List:
================================================================================
Missing Cookbooks:
------------------
No such cookbook: mbp-jenkins
Expanded Run List:
------------------
* mbp-jenkins::default
Have you tried using the chef_zero provisioner instead? I suspect your problem occurs because chef-solo does not run Berkshelf, which would explain the missing cookbooks.
For example see:
How to customise a tomcat recipe in Chef
Update
The issue appears to be that the cookbooks in the site-cookbooks directory are not being copied over to the target machine.
Seems to me the simplest and best fix is to include the local cookbooks in your Berksfile as follows:
source 'https://supermarket.chef.io'
cookbook 'ant', path: 'site-cookbooks/ant'
cookbook 'haxe_cookbook', path: 'site-cookbooks/haxe_cookbook'
cookbook 'mbp-jenkins', path: 'site-cookbooks/mbp-jenkins'
metadata
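For completeness, switching Test Kitchen to chef_zero (as suggested above) is a one-line provisioner change in .kitchen.yml; Berkshelf will then vendor everything the Berksfile lists, including the path-based site-cookbooks:

```yaml
# .kitchen.yml excerpt (illustrative)
provisioner:
  name: chef_zero
```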
