I'm using ServiceMix and I was trying to list some bundles and retrieve only their bundle IDs.
I'm trying to do the following:
osgi:list | grep -i | awk xxx
I tried to use awk but that's not provided in ServiceMix.
I also tried to use shell:exec like this:
osgi:list | grep -i | shell:exec awk 'xxx'
But that doesn't work either; maybe my approach is completely wrong.
Does anybody have experience with how I could achieve my goal of retrieving only the IDs of the bundles?
You can always make use of Karaf's shell scripting language (works as of Karaf 2.3.1):
bundles = $.context bundles ;
echo "Printing bundle information" ;
each ($bundles) {
    symbolicName = $it symbolicName ;
    bundleId = (($it bundleId) toString) ;
    echo "Symbolic name : " $symbolicName " Bundle Id : " $bundleId ;
}
When run, this will output something similar to:
Symbolic name : org.apache.felix.framework Bundle Id : 0
Symbolic name : org.ops4j.pax.url.mvn Bundle Id : 1
Symbolic name : org.ops4j.pax.url.wrap Bundle Id : 2
Symbolic name : org.ops4j.pax.logging.pax-logging-service Bundle Id : 3
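If you only need the IDs, the same loop collapses to a one-liner; this is a sketch against the same Karaf shell as the script above:

each ($.context bundles) { echo ($it bundleId) }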
I don't think there is a sed/awk command, and the list command cannot show only the bundle ID. You can log a JIRA ticket for an enhancement, perhaps adding options to the list command to select what to show (bundle ID, bundle version, state, etc.):
http://karaf.apache.org/index/community/support.html
I want to find the custom headers for a particular site.
I am able to view the raw attributes using the command below, but I think this shows the global value for all sites. I have created some headers for specific sites, but I am not able to view them. Is there a command to view the headers for a particular site?
> Get-IISConfigSection -SectionPath system.webServer/httpProtocol | Get-IISConfigCollection -CollectionName "customHeaders"
Attributes : {name, value}
ChildElements : {}
ElementTagName : add
IsLocallyStored : True
Methods :
RawAttributes : {[name, X-Powered-By], [value, ASP.NET]}
Schema : Microsoft.Web.Administration.ConfigurationElementSchema
UPDATE
I was able to find them using the command below.
> Get-IISConfigSection -SectionPath system.webServer/httpProtocol -CommitPath "testsite" | Get-IISConfigCollection -CollectionName "customHeaders"
But I am not able to extract the name and value.
I used Select-Object with RawAttributes, but it gives an empty {}.
You can use the Get-Website command to get the configuration information of a specific website:
Get-Website -Name "Default Web Site"
This command will return the configuration information of the default website.
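If the goal is the custom headers themselves rather than the general site configuration, here is a minimal sketch assuming the IISAdministration module (Get-IISConfigAttributeValue reads one attribute from a configuration element; "testsite" is the site name from the question):

$headers = Get-IISConfigSection -SectionPath "system.webServer/httpProtocol" -CommitPath "testsite" | Get-IISConfigCollection -CollectionName "customHeaders"
foreach ($header in $headers) {
    # Read each attribute explicitly instead of going through RawAttributes.
    $name  = Get-IISConfigAttributeValue -ConfigElement $header -AttributeName "name"
    $value = Get-IISConfigAttributeValue -ConfigElement $header -AttributeName "value"
    "$name : $value"
}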
For a GitLab CI/CD project, I need to find the URL of a Knative service (used to deploy a web service) so that I can use it as the base URL for load testing.
I have found that I can get the URL (and other information) with the command kubectl get ksvc helloworld-go, which outputs:
NAME URL LATESTCREATED LATESTREADY READY REASON
helloworld-go http://helloworld-go.default.34.83.80.117.xip.io helloworld-go-96dtk helloworld-go-96dtk True
Can someone please suggest an easy way to extract only the URL in an sh script? I believe the easiest way might be to take the text between the first and second space on the second line.
kubectl get ksvc helloworld-go | grep -oP "http://\S*"
(kubectl pads its table with spaces rather than tabs, so match up to the next whitespace rather than the next tab)
or
kubectl get ksvc helloworld-go | grep -Eo "http://[^[:space:]]*"
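If your kubectl and Knative versions expose the URL in the service status (the table output above suggests they do), a jsonpath query avoids parsing the padded table entirely; a sketch, not verified against your cluster:

kubectl get ksvc helloworld-go -o jsonpath='{.status.url}'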
I'm looking for a way to export all failed jobs (resolved and not) of a day into a file (text, CSV, XML, ...).
As it stands, I cannot check all the resolved/forced-OK jobs that failed throughout the day unless I do it manually by recording them in a spreadsheet.
Does anybody know if there is such a utility? We're currently using Control-M Server version 7.0.
You can schedule a job to do so: run the script below as a command-line job, passing two arguments, %%PARM1 %%PARM2.
You need to update three fields in it:
1. The NDP time of your environment (I have used 0930).
2. The Control-M environment name.
3. Your email address in the last line (mailx) and the file paths as per your system.
Note: you can use mutt -a if mailx -a is not working on your system for sending email with an attached file.
The job definition:
Job Type: Command
File Path: not required
Command: path/report.sh %%PARM1 %%PARM2
Leave the rest as normal, but don't forget to define PARM1 and PARM2 as AutoEdit variables:
PARM1 = %%$PREV
PARM2 = %%$DATE
The script (report.sh):
#!/bin/bash
env="<Control-M environment name>"   # fill in your Control-M environment name
# Update 0930 below to the NDP time of your environment.
ctmlog list $1 0930 $2 0930 | grep NOTOK > $1_failedjob.txt
# Keep date, time, job name, order ID and status; turn the pipe-delimited log into CSV.
cut -d'|' -f2,3,4,5,8 $1_failedjob.txt | sed 's/|/,/g' > $1_failed.csv
# Prepend a header row so the attached CSV is self-describing.
awk 'BEGIN {print "DATE,TIME,JOBNAME,ORDERID,STATUS"} {print}' $1_failed.csv > $1_failed.tmp && mv $1_failed.tmp $1_failed.csv
rm $1_failedjob.txt
echo " Last 24 hour failed job list " | mailx -s "Failed job list for $1" -a "<absolute path of $1_failed.csv>" youremail@domain.com
exit 0
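To test the script outside Control-M, the two arguments stand in for %%PARM1 and %%PARM2 (previous and current order date). The date format below is an assumption; use whatever your ctmlog installation accepts:

./report.sh 20140801 20140802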
Apart from using this, you can always ask your ops team to send a report by exporting failed jobs for a particular time and date from the Control-M EM GUI.
I am trying to integrate Hazelcast into the Tomcat cartridge (https://github.com/worldline/openshift-cartridge-tomcat). The problem is retrieving the ip:port pairs of the gears of a scalable application. I looked at how Vert.x does it, and it is perfectly fine: it uses a pub/sub mechanism to set the IP and port in a future environment variable. This can be seen in the hooks folder, in the "set-vertex-cluster" file:
echo $list > $OPENSHIFT_VERTX_DIR/env/OPENSHIFT_VERTX_HAZELCAST_CLUSTER
I did it the same way, replacing VERTX with TOMCAT (the short name of the cart), but after creating the app there is no OPENSHIFT_TOMCAT_HAZELCAST_CLUSTER env variable.
I looked at how JBoss EAP does it. It has:
touch ${OPENSHIFT_JBOSSEAP_DIR}/env/OPENSHIFT_JBOSSEAP_CLUSTER
https://github.com/openshift/origin-server/blob/master/cartridges/openshift-origin-cartridge-jbosseap/bin/install
It worked for me: I finally see the OPENSHIFT_TOMCAT_HAZELCAST_CLUSTER env var, and it is populated with gear_ip:gear_port. That's good. But when I scale the app I get the error:
Activation of new gears failed: 53d2ed31e0b8cd2bba00051f: Error activating gear: CLIENT_ERROR: Failed to execute: 'control start' for /var/lib/openshift/53d2ed31e0b8cd2bba00051f/tomcat
Unable to complete the requested operation due to: An invalid exit code (1) was returned from the server ex-std-node92.prod.rhcloud.com. This indicates an unexpected problem during the execution of your request.
Reference ID: 0b4e8a465d1901e8317a18739586e6d1
OPENSHIFT_TOMCAT_HAZELCAST_CLUSTER is populated with gear1_ip:gear1_port and gear2_ip:gear2_port, but of course the second gear failed to start.
When I remove
touch ${OPENSHIFT_TOMCAT_DIR}/env/OPENSHIFT_TOMCAT_HAZELCAST_CLUSTER
from the bin/install file, everything is fine, except that I don't have the list of cluster members...
I have been struggling with this problem all day long. Can anybody help me, please?
UPDATED:
• Customer A creates an OpenShift application A1 with the Git downloadable cartridge.
• OpenShift installs the downloadable cartridge onto node N1 and installs it into application A1.
• Customer A now wants to scale application A1.
• OpenShift tries to scale application A1 by acquiring a new gear on node N2 (notice that it is different from N1 above) and copies the content from A1 onto N2, but somehow does not copy all the environment variables and necessary settings into the .env folder of each gear.
• The gear creation now fails on N2, because the downloadable cartridge content is not available there, due to the following commands in bin/tomcat:
# Filter user-owned configuration files through sed to replace all
# ${OPENSHIFT_*} variables with their actual values, and write the
# resulting filtered files to the live Tomcat configuration location.
sed_replace_env=$(print_sed_exp_replace_env_var)
replacement_conf_files=(
"server.xml"
"context.xml"
)
for conf_file in "${replacement_conf_files[@]}"; do
sed ${sed_replace_env} ${OPENSHIFT_REPO_DIR}/.openshift/config/${conf_file} > ${OPENSHIFT_TOMCAT_DIR}/conf/${conf_file}
done
The particular function
function print_sed_exp_replace_env_var {
sed_exp=""
for openshift_var in $(env | grep OPENSHIFT_ | awk -F '=' '{print $1}')
do
# environment variable values that contain " or / need to be escaped
# or they will cause problems in the sed command line.
variable_val=$(echo "${!openshift_var}" | sed -e "s#\/#\\\\/#g" | sed -e "s/\"/\\\\\"/g")
# the entire sed s/search/replace/g command needs to be quoted in case the variable value
# contains a space.
sed_exp="${sed_exp} -e \"s/\\\${env.${openshift_var}}/${variable_val}/g\""
done
printf "%s\n" "$sed_exp"
}
As you can see, there is a danger that the sed command may turn server.xml and context.xml into blank files if the environment variables are not present.
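A defensive variant of the loop from bin/tomcat above (a sketch; the variable names are the ones from the excerpt) would refuse to overwrite the live config when the template is missing on the new gear:

for conf_file in "${replacement_conf_files[@]}"; do
    src="${OPENSHIFT_REPO_DIR}/.openshift/config/${conf_file}"
    dest="${OPENSHIFT_TOMCAT_DIR}/conf/${conf_file}"
    if [ -s "$src" ]; then
        sed ${sed_replace_env} "$src" > "$dest"
    else
        # Template missing or empty on this gear: keep the existing config instead of blanking it.
        echo "WARN: $src missing or empty; leaving $dest untouched" >&2
    fi
done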
The correct order that OpenShift should perform is:
• Customer A creates an OpenShift application A1 with the Git downloadable cartridge.
• OpenShift installs the downloadable cartridge onto node N1 and installs it into application A1.
• Customer A now wants to scale application A1.
• OpenShift tries to scale application A1 by acquiring a new gear on node N2 (notice that it is different from N1 above), and copies all necessary environment variables into the new gear as well.
• The script within the cartridge that requires the server.xml and context.xml templates from the downloadable cartridge can now find and copy them successfully.
I need a command to get only the active bundles in OSGi/Karaf.
I know that scr:list / osgi:list will list all the bundles irrespective of state.
Is there another easy way to check that all the bundles are active in Karaf?
Regards,
Harry
How about:
la | grep -i active
where la is a shortcut for osgi:list that lists all bundles, including system bundles.
Use
la | grep '| Active |'
since this will avoid any false positives due to a bundle which has 'active' in its name.
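The inverse check is handy when you want to see only the bundles that are not yet up; assuming your Karaf version's grep supports -v (invert match):

la | grep -v '| Active |'

Note that this also prints the table header lines, since they do not contain '| Active |'.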