CQ - how to know an OSGi Config factory PID in advance

Adobe's docs say:
When making a Factory Configuration append -<identifier> to the name.
As in: org.apache.sling.commons.log.LogManager.factory.config-<identifier>
Where <identifier> is replaced by free text that you (must) enter to identify the instance (you cannot omit this information); for example:
org.apache.sling.commons.log.LogManager.factory.config-MINE
This implies the "free text" is an identifier, not just a name. I was hoping it would be the service PID.
I'm setting up an instance of the JDBC Connections Pool. I've got an XML config file in my /jcr_root/apps/<my-app>/config directory named "com.day.commons.datasource.jdbcpool.JdbcPoolService-mypid.xml". Installing the package containing the XML file creates the expected, correctly-named sling:OsgiConfig node. This, in turn, does create a configured instance of the service, but the PID is com.day.commons.datasource.jdbcpool.JdbcPoolService.<random-guid>.
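For reference, the sling:OsgiConfig XML for such a factory config might look roughly like the sketch below (the property names and values here are illustrative assumptions, not copied from my actual file; check the service's metatype in the Felix console for the real ones):
<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
          xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="sling:OsgiConfig"
    jdbc.driver.class="com.mysql.jdbc.Driver"
    jdbc.connection.uri="jdbc:mysql://localhost:3306/mydb"
    jdbc.username="user"
    jdbc.password="secret"
    datasource.name="mypool"/>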
Is there some way to know what the PID will be in advance so that it can be referenced?

Currently there is no way to know this ID in advance. I already asked on the OSGi dev list if this could be enhanced; it would be nice to identify factory configs in a human-readable way. Unfortunately the response was that it is not necessary, or something along those lines. Maybe if some more people ask for it :-)

Related

add an RPC password to your bitcoin.conf file

I'm following the instructions here, and they say that I will find a bitcoin.conf file at Windows: %APPDATA%\Bitcoin\ and that:
To use bitcoind and bitcoin-cli, you will need to add a RPC password to your bitcoin.conf file. Both programs will read from the same file if both run on the same system as the same user, so any long random password will work: rpcpassword=change_this_to_a_long_random_password
However, when I navigate to %APPDATA%\Bitcoin\ I don't see a bitcoin.conf file.
So what do I do? Do I add a bitcoin.conf file myself? There is a bitcoin-conf.md file under doc in my Bitcoin install directory, so maybe it has something to do with that? I really don't know. Thanks for pointing me in the right direction.
That guide does not take into account the fact that, for some years now, you have not had to add any RPC user or password to your configuration file.
bitcoind will generate a cookie that allows the CLI (command-line interface) to communicate with the Bitcoin daemon over RPC without the user having to give it a single thought.
That is a developers' guide, though, and developers may have more complex requirements that are better served by specifying their own RPC authentication settings, such as running multiple wallets, or exchange software that communicates with one or more wallets.
Since bitcoin.conf is optional, it is not created by default and is not needed for ordinary usage; it only becomes necessary when the user or developer has particular, non-default settings to set.
The possible settings can be listed with the help command bitcoind -help, which shows a number of command-line parameters (beginning with a dash or hyphen) that can be typed or pasted after bitcoind on the command line; the same settings can be put in a text file named bitcoin.conf, without the leading dash. For example: -connect=IPAddress becomes simply connect=IPAddress in the conf file.
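For illustration (the values below are placeholders, not settings anyone must use), a minimal bitcoin.conf might look like:
# bitcoin.conf: command-line options without the leading dash
connect=203.0.113.5   # was -connect=IPAddress on the command line
server=1              # accept JSON-RPC commands
# only needed if you are not relying on cookie authentication:
rpcuser=myuser
rpcpassword=change_this_to_a_long_random_password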
For creating suitable rpcauth (username and hashed password), and rpcuser and rpcpassword values, I've found some resources such as https://github.com/jlopp/bitcoin-core-rpc-auth-generator
Rather than serving Jameson Lopp's RPC auth generator locally, you can simply use a Python script found in the Bitcoin repository: under the folder named "share" you will see a folder called rpcauth, which contains the rpcauth.py script and a small explanatory README.md.
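As a rough sketch (assuming a Bitcoin Core source checkout; the exact output wording may differ between versions), the script is used like this:
cd bitcoin/share/rpcauth
python3 rpcauth.py myuser
# prints something like:
#   rpcauth=myuser:a14191e6892fac...$e2894...
#   Your password: 9sY3...
# paste the rpcauth=... line into the server's bitcoin.conf; the client
# then authenticates with the username and the generated password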
You have to create this file and put a single line rpcpassword=<your_password> in it.
bitcoin-conf.md contains documentation for this bitcoin.conf and in particular states:
The configuration file is not automatically created; you can create it using your favorite text editor.
I recommend reading this doc; it may help you run your node.

Spring batch, where/how to save the metadata about jobs

How do I set up an external database (MySQL or Postgres; I'm not concerned with which one at this point) for storing the job metadata?
At the moment I have Spring Batch writing the results of jobs to MongoDB and that works fine, but I'm not keeping track of job status, so the jobs are run from the start every time, even if they were interrupted halfway through.
There are plenty of examples of how to avoid this, but I can't seem to find a clear answer on what I need to configure to send the metadata somewhere real rather than keeping it in memory.
I attempted adding a properties file, but that had no effect:
# for Postgres:
batch.jdbc.driver=org.postgresql.Driver
batch.jdbc.url=jdbc:postgresql://localhost/postgres
batch.jdbc.user=postgres
batch.jdbc.password=mysecretpassword
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.PostgreSQLSequenceMaxValueIncrementer
batch.schema.script=classpath:/org/springframework/batch/core/schema-postgresql.sql
batch.drop.script=classpath:/org/springframework/batch/core/schema-drop-postgresql.sql
batch.jdbc.testWhileIdle=false
batch.jdbc.validationQuery=
There are plenty of examples of how to avoid this, but I can't seem to find a clear answer on what I need to configure to send the metadata somewhere real rather than in-memory.
You need to configure a bean of type DataSource in your batch application context (or extend DefaultBatchConfigurer and set the data source you want to use to store the metadata).
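A minimal sketch with Java config could look like the following (the class name is made up, and the connection values just reuse the Postgres properties above; the metadata tables themselves still have to be created, e.g. with schema-postgresql.sql):
import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
@EnableBatchProcessing
public class BatchMetadataConfig {

    // With a DataSource bean present, @EnableBatchProcessing wires the
    // JobRepository to this database instead of the in-memory map version,
    // so job/step execution status survives restarts.
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://localhost/postgres");
        ds.setUsername("postgres");
        ds.setPassword("mysecretpassword");
        return ds;
    }
}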
There are many samples here: https://github.com/spring-projects/spring-batch/tree/master/spring-batch-samples
You can find the data source configuration here: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-samples/src/main/resources/data-source-context.xml

Mesos's resource information when executing a docker image

I'm working on the Mesos code and have become very confused about the resources needed to execute a Docker image.
In src/cli/execute.cpp, CommandScheduler::offers() pulls the resources out of the task and uses this resource information to decide whether to accept or decline an offer.
However, in CommandScheduler I don't see anywhere that the task's resources are set.
And in the main() function, where a CommandScheduler object is created, I only see a Docker image string used to create the TaskInfo, still with no explicit compute resource information.
I need to find this resource information explicitly at the code level. Could anyone help me understand this point?
I'm working on Mesos 1.2 right now.
Thanks
I got it. By default, the resources allocated are cpus:1;mem:128.
It's done through the default value of the resources flag:
add(&Flags::resources,
"resources",
"Resources for the command.",
"cpus:1;mem:128");

EC2 init.d script - what's the best practice

I'm creating an init.d script that will run a couple of tasks when the instance starts up.
It will create a new volume with our code repository and mount it, if one doesn't exist already.
It will tag the instance.
Completing the tasks above is crucial for our site (i.e. without the code repository mounted, the site won't work). How can I make sure that the server doesn't end up being publicly visible before they finish? Should I start my init.d script by de-registering the instance from the ELB (I'm not even sure it will be registered at that point), and then register it again once all the tasks have finished successfully?
What is the best practice?
Thanks!
You should have a health check on your ELB. So your server shouldn't get in unless it reports as happy. And it shouldn't report happy if the boot script errors out.
(Also, you should look into using cloud-init. That way you can change the boot script without making a new AMI.)
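For illustration only, a user-data boot script along these lines (volume ID, device name, mount point, and tag values are placeholders, not from the question) fails loudly if the mount does not come up, so the instance never starts serving and the ELB health check keeps it out of rotation:
#!/bin/bash
set -e   # abort on any error so the health check keeps failing

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Attach the code volume and wait for the attachment to complete
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id "$INSTANCE_ID" --device /dev/xvdf
aws ec2 wait volume-in-use --volume-ids vol-0123456789abcdef0
mkdir -p /srv/code
mount /dev/xvdf /srv/code

# Tag the instance
aws ec2 create-tags --resources "$INSTANCE_ID" --tags Key=Role,Value=webserver

# Only start the web server once everything above has succeeded,
# so the ELB health check passes
service httpd start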
I suggest you use CloudFormation instead. You can bring up a full stack of your system by representing it in a JSON format template.
For example, you can create an Auto Scaling group whose instances have unique tags and an additional volume attached (which presumably holds your code).
Here's a sample JSON template attaching an EBS volume to an instance:
https://s3.amazonaws.com/cloudformation-templates-us-east-1/EC2WithEBSSample.template
And here are many other JSON templates that you can use for guidance and to deploy your specific stack and application:
http://aws.amazon.com/cloudformation/aws-cloudformation-templates/
Of course you can accomplish the same with an init.d script or the rc.local file in your instance, but I believe CloudFormation is a cleaner solution because it works from the outside (not inside your instance).
You can also write your own script that brings up your stack from the outside, but why reinvent the wheel.
Hope this helps.

The proposed key is not within the partition defined by owning publisher:Apache JUDDI and OSB

I am trying to publish Oracle Service Bus proxy services to a UDDI registry (Apache jUDDI).
And I am getting the error in the subject when trying to publish a proxy service through OSB. Has anyone come across this before?
The exception is as follows when trying to publish a proxy named "foobar":
[2013-05-14 12:53:16,871] INFO {org.apache.cxf.phase.PhaseInterceptorChain} - Application {urn:uddi-org:v3_service}UDDIPublicationService#{urn:uddi-org:v3_service}save_service has thrown exception, unwinding now: org.apache.juddi.v3.error.KeyUnavailableException: The proposed key is not within the partition defined by owning publisher: uddi:bea.com:servicebus:default:foobar
Yes, I definitely have. See this blog post for details
http://apachejuddi.blogspot.com/2013/03/uddi-howto-create-tmodels-with-custom.html
Basically, you need to create a key generator for anything other than a key starting with the default one (which is something like uddi:org.apache.juddi:something)
To answer you more directly: create key generator tModels with the following keys, then retry your operation.
uddi:bea.com:keygenerator
uddi:bea.com:servicebus:keygenerator
uddi:bea.com:servicebus:default:keygenerator
These are the rules defined by the specification.
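For reference, a key generator tModel registration looks roughly like the following sketch of a UDDI v3 save_tModel payload (the tModel name and the authInfo handling are illustrative assumptions; only the tModelKey comes from the list above, and the same call is repeated for the other two keys):
<save_tModel xmlns="urn:uddi-org:api_v3">
  <authInfo><!-- auth token obtained from get_authToken --></authInfo>
  <tModel tModelKey="uddi:bea.com:keygenerator">
    <name>bea.com key generator</name>
    <categoryBag>
      <!-- marks this tModel as a key generator, per the UDDI v3 spec -->
      <keyedReference tModelKey="uddi:uddi.org:categorization:types"
                      keyName="uddi-org:types:keyGenerator"
                      keyValue="keyGenerator"/>
    </categoryBag>
  </tModel>
</save_tModel>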

Resources