Running addon search --refresh returns only a single result:
roo> addon search --refresh
Successfully downloaded Roo add-on Data
1 found, sorted by rank; T = trusted developer; R = Roo 1.2 compatible
ID T R DESCRIPTION -------------------------------------------------------------
01 - Y 0.1.0.BUILD Hindi language support for Spring Roo Web MVC JSP
Scaffolding; #mvc,#localization,locale:hi
--------------------------------------------------------------------------------
I am using Spring Roo version 1.2.2:
roo> version
    ____  ____  ____
   / __ \/ __ \/ __ \
  / /_/ / / / / / / /
 / _, _/ /_/ / /_/ /
/_/ |_|\____/\____/    1.2.2.RELEASE [rev 7d75659]
There have been a handful of problems with the addon repository in recent months - see here, for example. The (somewhat annoying) workaround is that if you can locate the URL for an addon you're interested in, you can still load it via the 'osgi obr url add' and 'osgi obr deploy' commands. You can see an example here.
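For instance, a minimal sketch of that workaround (the repository URL and the bundle symbolic name below are hypothetical placeholders, not a real add-on):
roo> osgi obr url add --url http://example.com/roo-repository.xml
roo> osgi obr deploy --bundleSymbolicName org.example.roo.addon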
So far, I have used Spring Boot 2.2 and Jib to build a Docker image.
But now Spring Boot 2.3 has been released, and the release notes say that Spring Boot 2.3 can build a Docker image with the Paketo buildpack by default.
Spring Boot 2.3 enhances Docker support with new features
This article says that Spring Boot 2.3 will allow for more efficient Docker builds.
I tried to build a Docker image with Spring Boot 2.3.
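A minimal sketch of the commands I mean (the build goal is the Spring Boot Maven plugin's spring-boot:build-image; the image name demo:0.0.1-SNAPSHOT is an assumption based on the plugin's default artifactId:version naming):
./mvnw spring-boot:build-image
docker run -p 8080:8080 demo:0.0.1-SNAPSHOT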
As shown below, Spring Boot 2.3 can build an image with some JVM options set by default to optimize memory.
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=83555K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx453020K (Head Room: 0%, Loaded Class Count: 12338, Thread Count: 250, Total Memory: 1.0G)
Adding 127 container CA certificates to JVM truststore
Spring Cloud Bindings Boot Auto-Configuration Enabled
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.3.1.RELEASE)
My question:
Is a Docker image built by Spring Boot 2.3 better than one built by Jib?
Not better, but different. Jib can build the image without a Docker installation.
The Spring Boot build unpacks the jar (slightly better startup) and puts the dependencies into their own layer. When you build a new version, those layers can be reused (if the dependencies have not changed), so only one new layer is created for your application, whose size is far smaller than the dependencies'. This makes the build faster, but you have to install Docker locally.
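As a minimal sketch of the difference in invocation (assuming the Jib Maven plugin is already configured in pom.xml):
# Jib: builds and pushes the image without a local Docker daemon
./mvnw compile jib:build
# Spring Boot 2.3 buildpacks: requires a local Docker daemon
./mvnw spring-boot:build-image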
Maybe somebody can help me.
I was testing Karaf on my local Windows laptop and everything worked fine. But now I'm trying to install it on a CentOS Linux server, and I'm not getting the same results.
My steps:
1- Downloaded Karaf from http://www.apache.org/dyn/closer.lua/karaf/4.2.8/apache-karaf-4.2.8.tar.gz
2- Extracted it to the CentOS Linux path /usr/local/apache-karaf-4.2.8
3- Checked the Java version installed on the server:
JAVA_HOME=/usr/local/jdk1.8.0_251
PATH=.../usr/local/jdk1.8.0_251/jre/bin..
4- Ran sudo ./bin/karaf and got:
karaf: JAVA_HOME not set; results may vary
        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/
Apache Karaf (4.2.8)
Anyway, the shell becomes active, but when I tried to add features I received this error (it doesn't seem to be related to the previous message):
karaf#root()> feature:repo-add camel
Adding feature url mvn:org.apache.camel.karaf/apache-camel/RELEASE/xml/features
Error executing command: Error resolving artifact org.apache.camel.karaf:apache-camel:xml:features:RELEASE: [Failed to resolve version for org.apache.camel.karaf:apache-camel:xml:features:RELEASE: Could not find metadata org.apache.camel.karaf:apache-camel/maven-metadata.xml in local (/root/.m2/repository)] : mvn:org.apache.camel.karaf/apache-camel/RELEASE/xml/features
I did the same on my PC and everything worked perfectly. Am I missing something specific to Linux? Any ideas?
Thanks in advance.
Karaf uses etc/org.ops4j.pax.url.mvn.cfg to list all the Maven repositories from which it can download your bundles/features.
By default, the value for this distribution is:
org.ops4j.pax.url.mvn.repositories= \
https://repo1.maven.org/maven2#id=central, \
https://repository.apache.org/content/groups/snapshots-group#id=apache#snapshots#noreleases, \
https://oss.sonatype.org/content/repositories/ops4j-snapshots#id=ops4j.sonatype.snapshots.deploy#snapshots#noreleases
However, in your case it seems to resolve only from /root/.m2/repository. Could you check your etc/org.ops4j.pax.url.mvn.cfg and see whether it has been modified?
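For example, a quick check (a sketch; the JAVA_HOME path is taken from your step 3, and note that sudo does not pass through your user's environment, which also explains the 'JAVA_HOME not set' warning):
grep -A3 org.ops4j.pax.url.mvn.repositories etc/org.ops4j.pax.url.mvn.cfg
sudo env JAVA_HOME=/usr/local/jdk1.8.0_251 ./bin/karaf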
I was looking for logging frameworks available in the OpenDaylight Controller, something similar to the ELK stack. I found Apache Decanter as a possible way to do this:
https://karaf.apache.org/manual/decanter/latest-1/
The problem is that it works fine with the normal Karaf shell but does not work with the ODL Karaf shell of the Oxygen SR4 release.
As per the documentation:
https://karaf.apache.org/download.html#decanter-installation
feature:repo-add decanter
feature:install decanter-appender-elasticsearch
feature:install decanter-collector-log
feature:install decanter-collector-jmx
I tried the same in the ODL Karaf shell.
I downloaded the Oxygen-SR4 binary and started the karaf shell.
./karaf clean
Apache Karaf starting up. Press Enter to open the shell now...
100% [========================================================================]
Karaf started in 0s. Bundle stats: 13 active, 13 total
________                       ________                .__  .__       .__     __
\_____  \ ______   ____   ____ \______ \ _____  ___.__.|  | |__| ____ |  |___/  |_
 /   |   \\____ \_/ __ \ /    \ |    |  \\__  \<   |  ||  | |  |/ ___\|  |  \   __\
/    |    \  |_> >  ___/|   |  \|    `   \/ __ \\___  ||  |_|  / /_/  >   Y  \  |
\_______  /   __/ \___  >___|  /_______  (____  / ____||____/__\___  /|___|  /__|
        \/|__|        \/     \/        \/     \/\/            /_____/      \/
Hit '<tab>' for a list of available commands and '[cmd] --help' for help on a specific command. Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown OpenDaylight.
opendaylight-user#root>system:version
4.1.6
opendaylight-user#root>feature:repo-add decanter
Adding feature url
opendaylight-user#root>feature:install decanter-appender-elasticsearch
org.apache.karaf.features.core[org.apache.karaf.features.internal.service.FeaturesServiceImpl] : null
But the same thing works with the plain Apache Karaf shell.
./karaf
        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/
Apache Karaf (4.2.5)
Hit '<tab>' for a list of available commands and '[cmd] --help' for help on a specific command. Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown Karaf.
karaf#root()> feature:repo-add decanter
Adding feature url mvn:org.apache.karaf.decanter/apache-karaf-decanter/RELEASE/xml/features
karaf#root()> feature:install decanter-appender-elasticsearch
karaf#root()>
Can anyone point out what is missing here? The shell versions seem similar to me.
Can you also suggest other logging frameworks for processing Karaf logs and data in the OpenDaylight Controller (Oxygen SR4), something similar to the ELK stack?
We use Decanter in upstream OpenDaylight system testing. The features we install (using the featuresBoot variable in etc/org.apache.karaf.features.cfg) are:
odl-jolokia,decanter-collector-jmx,decanter-appender-elasticsearch
But we also configure featuresRepositories to include:
mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.0.0/xml/features
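Put together, the relevant entries in etc/org.apache.karaf.features.cfg look roughly like this (a sketch; each '...' stands for whatever repositories and features your distribution already lists):
featuresRepositories = ..., mvn:org.apache.karaf.decanter/apache-karaf-decanter/1.0.0/xml/features
featuresBoot = ..., odl-jolokia, decanter-collector-jmx, decanter-appender-elasticsearch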
Here is a wiki page with some extra info.
Here is an example of us grabbing data to find memory usage; we also install Elasticsearch, which lets us see it as a graph over time.
Hope it helps.
I am new to the Angular CLI. My background is in C# ASP.NET. I recently spun up an ASP.NET Core app with Angular CLI through the project wizard of Visual Studio. All that I needed to do to get things working on http://localhost:4200 was to run npm install @angular/cli
and then
ng serve
and I was good to go.
To get the latest stable release of Angular, I then ran
ng update
Now, ng serve returns the error "The serve command requires to be run in an Angular project, but a project definition could not be found."
Also, one other observation is that the $schema reference to schema.json from the .angular-cli.json file is no longer available at the path "./node_modules/@angular/cli/lib/config/schema.json". The config folder is completely missing, and I've read that it was eliminated in a previous version.
My question is: how do I change my .angular-cli.json or other project files to conform to the new standard?
Below is the output of my current versions.
C:\Users\andrew.kirby\source\repos\SchoolsList\SchoolsList\ClientApp>ng --version
    _                      _                 ____ _     ___
   / \   _ __   __ _ _   _| | __ _ _ __     / ___| |   |_ _|
  / △ \ | '_ \ / _` | | | | |/ _` | '__|   | |   | |    | |
 / ___ \| | | | (_| | |_| | | (_| | |      | |___| |___ | |
/_/   \_\_| |_|\__, |\__,_|_|\__,_|_|       \____|_____|___|
               |___/
Angular CLI: 7.3.8
Node: 10.9.0
OS: win32 x64
Angular: 7.2.14
... animations, common, compiler, compiler-cli, core, forms
... http, language-service, platform-browser
... platform-browser-dynamic, platform-server, router
Package Version
------------------------------------------------------
@angular-devkit/architect 0.13.8
@angular-devkit/core 7.3.8
@angular-devkit/schematics 7.3.8
@angular/cli 7.3.8
@schematics/angular 7.3.8
@schematics/update 0.13.8
rxjs 6.5.1
typescript 3.2.4
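For reference, the Angular 6 update guide described a one-time migration that converts .angular-cli.json into the newer angular.json format; a sketch of that command (the --from value is an assumption about the project's original CLI version):
ng update @angular/cli --migrate-only --from=1.7.4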
I have Spark 1.6.2 and Spark 2.0 installed on my Hortonworks cluster.
Both versions are installed on a node in the 5-node Hadoop cluster.
Each time I start the spark-shell I get:
$ spark-shell
Multiple versions of Spark are installed but SPARK_MAJOR_VERSION is not set
Spark1 will be picked by default
When I check the version I get:
scala> sc.version
res0: String = 1.6.2
How can I start the other version (the spark-shell of Spark 2.0)?
You just need to set the major version to 2 or 1:
export SPARK_MAJOR_VERSION=2
$ export SPARK_MAJOR_VERSION=2
$ spark-submit --version
SPARK_MAJOR_VERSION is set to 2, using Spark2
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.0.0.2.5.0.0-1245
      /_/
This approach also works: typing
spark-shell
loads Spark 1.6, whilst typing
spark2-shell
loads Spark 2.0.
$ SPARK_MAJOR_VERSION=2 spark-shell
Use spark2-submit, pyspark2 or spark2-shell.
If you are using Windows 8 or 10, change the SPARK_HOME environment variable to the Spark 2 or Spark 3 installation, whichever you want to use, and change the PATH variable too. Then close the terminal, restart it, and launch the spark-shell; you will see your chosen version as the default.
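For example, from a Windows command prompt (a sketch; the install path is an assumption):
setx SPARK_HOME "C:\spark\spark-2.0.0-bin-hadoop2.7"
rem add %SPARK_HOME%\bin to PATH via System Properties > Environment Variables,
rem then open a new terminal and launch spark-shell to see the new default version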