Error on multi-tenant Hybris Backoffice start-up

I have set up a slave tenant on a Hybris 1811 installation, but I cannot get the Backoffice to work for the slave tenant (foo). The error I get in the browser is: Server Error.
I have followed the instructions from here: How to access Backoffice in JUnit Tenant, but I cannot get it to work.
tenant_foo.properties
db.tableprefix=foo_
cronjob.timertask.loadonstartup=false
tenant.restart.on.connection.error=false
db.factory=de.hybris.platform.jdbcwrapper.JUnitDataSourceFactory
db.url=jdbc:oracle:thin:@localhost:1521:foo
db.driver=oracle.jdbc.OracleDriver
db.username=foo
db.password=bar
hac.webroot=/hac_foo
local_tenant_foo.properties
backoffice.webroot=/backoffice_foo
I have checked the Hybris logs and found this error:
ERROR [localhost-startStop-3] (foo) [ContextLoader] Context initialization failed
org.springframework.beans.factory.support.BeanDefinitionValidationException: java.io.IOException: Unable to remove a module library: E:\hybris-1811\data\backoffice\widgetlib\deployed\voucherbackoffice.jar; nested exception is com.hybris.cockpitng.core.CockpitApplicationException: java.io.IOException: Unable to remove a module library: E:\hybris-1811\data\backoffice\widgetlib\deployed\voucherbackoffice.jar
at com.hybris.backoffice.BackofficeApplicationContext.prepareRefresh(BackofficeApplicationContext.java:106) ~[classes/:?]
HAC works fine for both tenants (master and foo), but Backoffice only works for the master tenant. Also, if I navigate to HAC -> Tenants -> foo -> view -> configured extensions, I can see that for the extensions acceleratorservices and admincockpit, the WebContext column displays "Missing configuration for this context in current tenant".

Try adding a separate Backoffice library home for each tenant:
backoffice.library.home=${data.home}/foo
(foo is the tenant ID). There is also some documentation about it in the help here.
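A minimal sketch of where this could go, assuming you keep it next to the other tenant-specific overrides in local_tenant_foo.properties (foo is simply the tenant ID used above):
# local_tenant_foo.properties
backoffice.webroot=/backoffice_foo
# give the slave tenant its own widget library directory so deployed
# widget JARs (e.g. voucherbackoffice.jar) are not shared with master
backoffice.library.home=${data.home}/foo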
I hope it helps!

Related

Running my Revel application on Windows 10 fails

I had a problem when running my Revel app on Windows.
The app is created fine, but it doesn't run when I try; I only get the output below. Any ideas?
C:\Desarrollo\Web\webpro>revel run -a webpro
Revel executing: run a Revel application
WARN 05:53:33 harness.go:175: No http.addr specified in the app.conf listening on localhost interface only. This will not allow external access to your application
Changed detected, recompiling
Parsing packages, (may require download if not cached)... Completed
ERROR 05:53:38 build.go:406: Build errors errors="C:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\memcached.go:11:2: no required module provides package github.com/bradfitz/gomemcache/memcache; to add it:\n\tgo get github.com/bradfitz/gomemcache/memcache\nC:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\redis.go:10:2: no required module provides package github.com/garyburd/redigo/redis; to add it:\n\tgo get github.com/garyburd/redigo/redis\nC:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\inmemory.go:12:2: no required module provides package github.com/patrickmn/go-cache; to add it:\n\tgo get github.com/patrickmn/go-cache\n"
C:\Users\Mario\go\src\webpro\C:\Users\Mario\go\pkg\mod\github.com\revel\revel#v1.0.0\cache\memcached.go:11
WARN 05:53:38 build.go:420: Could not find in GO path file=C:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\memcached.go:11
ERROR 05:53:38 harness.go:239: Build detected an error error="Go Compilation Error (in C:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\memcached.go:11:2): no required module provides package github.com/bradfitz/gomemcache/memcache; to add it:"
Error compiling code, to view error details see proxy running on http://:9000
Time to recompile 5.3684655s
I am new to this.
Best
Check your IPv4 address with the ipconfig command.
Open webpro/conf/app.conf and set the http.addr parameter to that IPv4 address.
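For example, if ipconfig reports 192.168.1.50 (an assumed address for illustration), the relevant lines in webpro/conf/app.conf would look like:
http.addr = 192.168.1.50
http.port = 9000
After saving, run revel run -a webpro again.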

Spinnaker & Okta integration failing

Scenario:
Upgraded Spinnaker to 1.12.0. No other config changes were made that would impact this integration (we had to modify an S3 IAM because it quit working). The Okta integration stopped working. The public key was reissued for the ingress during the install process, which may be relevant.
SAML-TRACE shows payload getting to okta and back
Spinnaker throws two different errors depending on the browser and how I get there.
Direct link to deck url: (500) No IDP was configured, please update included metadata with at least one IDP (seen in browser and gate)
Okta "chicklet" in okta dashboard: (401) Authentication Failed: Incoming SAML message is invalid
Config details (again none of this changed):
Downloading metadata directly
JKS is being leveraged and is valid
service url is confirmed
alias for JKS is confirmed
I had this issue as well when upgrading from 1.10.13 to 1.12.2. I found lots of these error messages in Gate's logs:
2019-02-19 05:31:30.421 ERROR 1 --- [.0-8084-exec-10] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [org.opensaml.saml2.metadata.provider.MetadataProviderException: No IDP was configured, please update included metadata with at least one IDP] with root cause
org.opensaml.saml2.metadata.provider.MetadataProviderException: No IDP was configured, please update included metadata with at least one IDP
at org.springframework.security.saml.metadata.MetadataManager.getDefaultIDP(MetadataManager.java:795) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
at org.springframework.security.saml.context.SAMLContextProviderImpl.populatePeerEntityId(SAMLContextProviderImpl.java:157) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
at org.springframework.security.saml.context.SAMLContextProviderImpl.getLocalAndPeerEntity(SAMLContextProviderImpl.java:127) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
at org.springframework.security.saml.SAMLEntryPoint.commence(SAMLEntryPoint.java:146) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
at org.springframework.security.web.access.ExceptionTranslationFilter.sendStartAuthentication(ExceptionTranslationFilter.java:203) ~[spring-security-web-4.2.9.RELEASE.jar:4.2.9.RELEASE]
...
After downgrading back to 1.10.13, I upgraded to the next version, 1.11.0, and found that's when the issue started. Eventually, I looked at Gate's logs from the launch of the container and found:
2019-02-20 22:31:40.132 ERROR 1 --- [0.0-8084-exec-3] o.o.s.m.provider.HTTPMetadataProvider : Error retrieving metadata from https://000000000000.okta.com/app/00000000000000000/sso/saml/metadata
javax.net.ssl.SSLException: Error in hostname verification
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.verifyHostname(TLSProtocolSocketFactory.java:241) ~[openws-1.5.4.jar:na]
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.createSocket(TLSProtocolSocketFactory.java:186) ~[openws-1.5.4.jar:na]
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707) ~[commons-httpclient-3.1.jar:na]
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387) ~[commons-httpclient-3.1.jar:na]
...
This led me to realize that the TLS certificate was being rejected by Gate. I'm not sure why it suddenly started failing the check. Up to this point, I had it configured as:
$ hal config security authn saml edit --metadata https://000000000000.okta.com/app/00000000000000000/sso/saml/metadata
I ended up downloading the metadata file and redeploying with halyard.
$ wget https://000000000000.okta.com/app/00000000000000000/sso/saml/metadata
$ hal config security authn saml edit --metadata "${PWD}/metadata"
$ hal config version edit --version 1.12.2
$ hal deploy apply
Opened up a private browser window as suggested by the Spinnaker documentation and Gate started redirecting to Okta correctly again.
Issue filed, https://github.com/spinnaker/spinnaker/issues/4017.
So I ended up finding the answer: the Tomcat configuration for Gate apparently changed in later Spinnaker versions.
I created this snippet in ~/.hal/default/profiles/gate-local.yml:
server:
  tomcat:
    protocolHeader: X-Forwarded-Proto
    remoteIpHeader: X-Forwarded-For
    internalProxies: .*
Deployed Spinnaker and it was back to working.
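If it helps, this is roughly how the profile fits into a default Halyard setup (the paths and deploy command are the ones used above; the mkdir is only needed if the profiles directory does not exist yet):
mkdir -p ~/.hal/default/profiles
# create gate-local.yml there with the server.tomcat settings shown above,
# then redeploy so Gate picks up the new profile
hal deploy apply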

SonarQube Install - startsonar.bat fails

SonarQube 6.0 Installation
No plugins currently installed
Presently working through what appears to be a connection-string problem.
In a command prompt, running as Administrator, I enter
StartSonar
This fails with rather cryptic output to the Command Prompt window. The following is what I believe is the relevant excerpt from the sonar.log file:
2016.08.11 12:23:29 INFO web[o.sonar.db.Database] Create JDBC data source for jdbc:sqlserver://localhost/DevOps;databaseName=sonar
2016.08.11 12:23:43 ERROR web[o.a.c.c.C.[.[.[/sonar]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.PlatformServletContextListener
java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
at org.sonar.db.DefaultDatabase.checkConnection(DefaultDatabase.java:104) ~[sonar-db-6.0.jar:na]
at org.sonar.db.DefaultDatabase.start(DefaultDatabase.java:71) ~[sonar-db-6.0.jar:na]
Relevant excerpt from .config file:
sonar.jdbc.url=jdbc:sqlserver://localhost/DevOps:1433;databaseName=sonar
sonar.jdbc.username=SonarQube
sonar.jdbc.password=PassWord1234
Finally, victory is mine! To be fair, I looked at the .config file from TeamCity. Thanks JetBrains!
The correct entries from the .config file (note the backslash escape characters!):
sonar.jdbc.url=jdbc\:sqlserver\://localhost\\DevOps\:1433;databaseName\=Sonar
sonar.jdbc.username=SonarQube
sonar.jdbc.password=PassWord1234
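For comparison, a sketch of what that URL means without the properties-file escaping (assuming DevOps is a named SQL Server instance on localhost, as the backslash suggests):
sonar.jdbc.url=jdbc:sqlserver://localhost\DevOps:1433;databaseName=Sonar
In a Java properties file the backslash must be escaped, which is why the working entry doubles it and also escapes the colons and equals sign.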

Need help deploying a WAR using wsadmin install with JNDI

I am trying to deploy a web application using the wsadmin tool, but it is throwing an error.
The JACL script that I am using is:
$AdminApp install /opt/www/temp/SampleApp.war {-nopreCompileJSPs -nodeployejb -server delivery -cell delivery_cell -node delivery_node -appname SampleApp -contextroot SampleApp -MapWebModToVH {{"SampleApp" SampleApp.war,WEB-INF/web.xml default_host}}}
The error I am getting is:
com.ibm.ws.scripting.ScriptingException: WASX7109E: Insufficient data for install task "MapResRefToEJB
ADMA0007E: A validation error occurred in task Mapping resource references to resources. The Java Naming and Directory Interface (JNDI) name is not specified for resource reference jdbc/app_DB in module SampleApp with EJB name.
From the error above I understand that I need to configure the JNDI binding with -MapResRefToEJB. I tried to understand this option, but I'm getting confused.
Can anyone help me resolve this issue?
These errors appear to be caused by the MapResRefToEJB option in the wsadmin command not being set correctly, or by the resource it points to not existing correctly in the web.xml file.
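A sketch of what the install command could look like with the mapping added; the column order (module, EJB, URI, resource reference, resource type, target JNDI name) is based on the AdminApp task options documentation linked below, and the target JNDI name jdbc/app_DB is an assumption — use whatever data source JNDI name is actually configured on your server:
$AdminApp install /opt/www/temp/SampleApp.war {-nopreCompileJSPs -nodeployejb -server delivery -cell delivery_cell -node delivery_node -appname SampleApp -contextroot SampleApp -MapWebModToVH {{"SampleApp" SampleApp.war,WEB-INF/web.xml default_host}} -MapResRefToEJB {{"SampleApp" "" SampleApp.war,WEB-INF/web.xml jdbc/app_DB javax.sql.DataSource jdbc/app_DB}}}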
Additional information on MapResRefToEJB
Options for the AdminApp object install, installInteractive, edit,
editInteractive, update, and updateInteractive commands
http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/rxml_taskoptions.html
Thank you.
Note: Opinions are my own.

HPCC/HDFS Connector

Does anyone know about the HPCC/HDFS connector? We are using both HPCC and Hadoop. There is a utility (the HPCC/HDFS connector) developed by HPCC that allows an HPCC cluster to access HDFS data.
I have installed the connector, but when I run the program to access data from HDFS it gives an error saying libhdfs.so.0 doesn't exist.
I tried to build libhdfs.so using the command
ant compile-libhdfs -Dlibhdfs=1
It gives me the error
target "compile-libhdfs" does not exist in the project "hadoop"
I tried one more command:
ant compile-c++-libhdfs -Dlibhdfs=1
It gives the error:
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
[get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
to /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
BUILD FAILED java.net.ConnectException: Connection timed out
Any suggestion would be a great help.
Chhaya, you might not need to build libhdfs.so; depending on how you installed Hadoop, you might already have it.
Check in HADOOP_LOCATION/c++/Linux-<arch>/lib/libhdfs.so, where HADOOP_LOCATION is your hadoop install location, and arch is the machine’s architecture (i386-32 or amd64-64).
Once you locate the lib, make sure the H2H connector is configured correctly (see page 4 here).
It's just a matter of updating the HADOOP_LOCATION var in the config file:
/opt/HPCCSystems/hdfsconnector.conf
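A minimal sketch of that check and edit, assuming a 64-bit machine and the Hadoop install path that appears in the build output above (adjust both to your environment):
# verify the prebuilt library is there
ls /home/hadoop/hadoop-0.20.203.0/c++/Linux-amd64-64/lib/libhdfs.so
# then point the connector at that Hadoop install in /opt/HPCCSystems/hdfsconnector.conf
HADOOP_LOCATION=/home/hadoop/hadoop-0.20.203.0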
Good luck.
