In phantom-dsl version 1.12.2, what is the mechanism to close/shut down/clean up resources after you have finished talking to Cassandra when using the RootConnector way of connecting to a Cassandra cluster?
This is an example:
object Whatever extends DatabaseProvider {

  private[this] def shutdownCassandra(): Unit = {
    com.websudos.phantom.Manager.shutdown()
    database.session.close()
    database.session.getCluster.close()
  }
}
To understand what DatabaseProvider is, have a look here.
Update
As of phantom 1.15.0, there is a shutdown method available by default on any Database object.
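The manual cleanup above can therefore be reduced to a call to that method. A minimal sketch, assuming the same Whatever object as before and that the shutdown method takes no arguments:

object Whatever extends DatabaseProvider {

  // delegate to the built-in shutdown instead of closing the session and cluster by hand
  private[this] def shutdownCassandra(): Unit = database.shutdown()
}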
Related
By default I assume that Spring Boot/Camel is using org.apache.camel.support.processor.DefaultExchangeFormatter.
I wonder how I can set the 'showHeaders' flag inside a Spring Boot app, because I would like to see the headers in the "org.apache.camel.tracing" log as well.
Wish you all a wonderful day.
DefaultTracer is used in Camel to trace routes by default.
It is created with the showHeaders(false) formatter option set.
Therefore you could implement another Tracer (consider extending from DefaultTracer) to enable putting headers into traced messages.
I need this mostly in my tests, so I have built this into my base test class:
@BeforeEach
public void before() {
    // if the tracer uses the DefaultExchangeFormatter, switch header output on
    if (camelContext.getTracer().getExchangeFormatter() instanceof DefaultExchangeFormatter) {
        DefaultExchangeFormatter def = (DefaultExchangeFormatter) camelContext.getTracer().getExchangeFormatter();
        def.setShowHeaders(true);
    }
}
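If you need the same behaviour outside of tests, the formatter can be adjusted once at application startup instead. A minimal sketch, assuming Camel's Spring Boot support and that getTracer() hands back a tracer whose formatter is the DefaultExchangeFormatter, as in the snippet above (the TracingConfig class name is just for illustration):

import org.apache.camel.CamelContext;
import org.apache.camel.spring.boot.CamelContextConfiguration;
import org.apache.camel.support.processor.DefaultExchangeFormatter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TracingConfig {

    @Bean
    public CamelContextConfiguration showHeadersInTrace() {
        return new CamelContextConfiguration() {
            @Override
            public void beforeApplicationStart(CamelContext camelContext) {
                // same check as in the test: only touch the formatter if it is the default one
                if (camelContext.getTracer().getExchangeFormatter() instanceof DefaultExchangeFormatter) {
                    DefaultExchangeFormatter formatter =
                            (DefaultExchangeFormatter) camelContext.getTracer().getExchangeFormatter();
                    formatter.setShowHeaders(true);
                }
            }

            @Override
            public void afterApplicationStart(CamelContext camelContext) {
                // nothing to do after start
            }
        };
    }
}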
Is there a way in Parse Platform to fall back to the local data store if there is no connection?
I understand that there is pin/pinInBackground, so I can pin any object to the LocalDataStore.
Then I can query the local datastore to get that info.
However, I always want to try to get the server data first, and if that fails, get the local data.
Is there a way to do this automatically?
(or I have to pin everything locally, then query remote and if it fails, then query locally)
Great question.
Parse has the concept of cached queries. https://docs.parseplatform.org/ios/guide/#caching-queries
The interesting feature of cached queries is that you can specify "if no network". However, this only works if you have previously cached the query results. I've also found that the delay between losing network connectivity and the cached query recognising that it has lost the network makes the whole capability a bit rubbish.
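For illustration, a cached query is configured through the query's cachePolicy property. A small sketch (the class name is the one used further down in this answer; also note that, as far as I know, a cache policy cannot be combined with the local datastore, which is one more reason to prefer pinning):

let query = PFQuery(className: "mySecretClass")
// try the server first and fall back to previously cached results if the request fails
query.cachePolicy = .networkElseCache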
How I have resolved this issue is by using a combination of the Alamofire library and pinning objects. The reason I chose the Alamofire library is that it's extremely well supported and it spots drops in network connectivity near-immediately. I only have a few hundred records, so I'm not worried about pinning objects, and performance definitely does not seem to be affected. So this is how I work it...
Define some properties at the top of the class
// Network management
private var reachability: NetworkReachabilityManager!
private var hasInternet: Bool = false
Call a method as the view awakes
// View lifecycle
override func awakeFromNib() {
    super.awakeFromNib()
    self.monitorReachability()
}
Update the flag when network availability changes. I know this method could be improved.
private func monitorReachability() {
    NetworkReachabilityManager.default?.startListening { status in
        if "\(status)" == "notReachable" {
            self.hasInternet = false
        } else {
            self.hasInternet = true
        }
        print("hasInternet = \(self.hasInternet)")
    }
}
Then when I call a query I have a switch as I set up the query object.
// Start setup of query
let query = PFQuery(className: "mySecretClass")
if self.hasInternet == false {
    query.fromLocalDatastore()
}
// Complete rest of query configuration
Of course I pin all the results I ever return from the server.
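The pinning can happen right where the results come back. A sketch that reuses the query from above and the pinInBackground call mentioned in the question (error handling omitted):

query.findObjectsInBackground { objects, error in
    guard let objects = objects else { return }
    for object in objects {
        // keep a local copy so the same query can be served from the local datastore when offline
        object.pinInBackground()
    }
}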
Hi, I am new to GemFire and I want to expire data in a GemFire region for a specific key after an idle time which I set.
I did this in Redis with the code below:
jedis.set(key, value);
config.setMaxIdle(50);
jedis.expire(key, config.getMaxIdle());
But how do I do this in GemFire?
Any help is appreciated.
Thanks.
You can control the expiration of individual keys if you configure the region to use a custom expiration. You provide an implementation of the CustomExpiry interface that can look at each entry and decide when it should expire. For example:
RegionFactory regionFactory = ...
regionFactory.setCustomEntryIdleTimeout(new CustomExpiry() {
    @Override
    public ExpirationAttributes getExpiry(Entry entry) {
        if (entry.getKey().equals("XXX")) {
            return new ExpirationAttributes(50, ExpirationAction.INVALIDATE);
        }
        // returning null means the entry falls back to the region's default expiration
        return null;
    }
});
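For what it's worth, a short usage sketch (the region name "example" is made up here, and as far as I remember expiration requires statistics to be enabled on the region): only the entry with key "XXX" gets the 50-second idle timeout; every other entry keeps the region defaults.

regionFactory.setStatisticsEnabled(true);        // expiration requires statistics
Region region = regionFactory.create("example"); // hypothetical region name
region.put("XXX", "value");                      // invalidated after being idle for 50 seconds
region.put("YYY", "value");                      // not matched by the CustomExpiry, keeps the defaults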
If you want to expire data for a specific region, try the following code:
Region<String, String> region = cache
    .<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
    .setEntryTimeToLive(new ExpirationAttributes(50))
    .create(PropertiesCache.getInstance().getProperty("region"));
It works in my situation.
After reading about the remote shell in the Spring Boot documentation I started playing around with it. I implemented a new Command that produces a Stream of one of my database entities called company.
This works fine. So I want to output my stream of companies in the console. This is done by calling toString() by default. While this seems reasonable, there is also a way to get nicer results by using a Renderer.
Implementing one should be straightforward, as I can delegate most of the work to one of the already existing ones. I use MapRenderer.
class CompanyRenderer extends Renderer<Company> {

    private final mapRenderer = new MapRenderer()

    @Override Class<Company> getType() { Company }

    @Override LineRenderer renderer(Iterator<Company> stream) {
        def list = []
        stream.forEachRemaining({
            list.add([id: it.id, name: it.name])
        })
        return mapRenderer.renderer(list.iterator())
    }
}
As you can see, I just take some fields from my entity, put them into a Map and then delegate to an instance of MapRenderer to do the real work.
TL;DR
The only problem is: how do I register my Renderer with CRaSH?
Links
Spring Boot documentation http://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-remote-shell.html
CRaSH documentation (not helping) http://www.crashub.org/1.3/reference.html#_renderers
I'm getting an error when trying to run the EF 4.3.1 add-migrations command:
"The model backing the ... context has changed since the database was created".
Here's one sequence that gets the error (although I've tried probably a dozen variants which also all fail)...
1) Start with a database that was created by EF Code First (i.e., it already contains a _MigrationHistory table with only the InitialCreate row).
2) The app's code data model and database are in-sync at this point (the database was created by CF when the app was started).
3) Because I have four DbContexts in my "Services" project, I didn't run the 'enable-migrations' command (it doesn't handle multiple contexts). Instead, I manually created the Migrations folder in the Services project and the Configuration.cs file (included at the end of this post). [I think I read this in a post somewhere]
4) With the database not yet changed, and the app stopped, I use the VS EDM editor to make a trivial change to my data model (add one property to an existing entity), and have it generate the new classes (but not modify the database, obviously). I then rebuild the solution and all looks OK (but don't delete the database or restart the app, of course).
5) I run the following PMC command (where "App" is the name of one of the classes in Configuration.cs):
PM> add-migration App_AddTrivial -conf App -project Services -startup Services -verbose
... which fails with the "The model ... has changed. Consider using Code First Migrations..." error.
What am I doing wrong? And does anyone else see the irony in the tool telling me to use what I'm already trying to use ;-)
What are the correct steps for setting up a solution starting with a database that was created by EF CF? I've seen posts saying to run an initial migration with -IgnoreChanges, but I've tried that and it doesn't help. Actually, I've spent all DAY testing various permutations, and nothing works!
I must be doing something really stupid, but I don't know what!
Thanks,
DadCat
Configuration.cs:
namespace mynamespace
{
    internal sealed class App : DbMigrationsConfiguration<Services.App.Repository.ModelContainer>
    {
        public App()
        {
            AutomaticMigrationsEnabled = false;
            MigrationsNamespace = "Services.App.Repository.Migrations";
        }

        protected override void Seed(Services.App.Repository.ModelContainer context)
        {
        }
    }

    internal sealed class Catalog : DbMigrationsConfiguration<Services.Catalog.Repository.ModelContainer>
    {
        public Catalog()
        {
            AutomaticMigrationsEnabled = false;
            MigrationsNamespace = "Services.Catalog.Repository.Migrations";
        }

        protected override void Seed(Services.Catalog.Repository.ModelContainer context)
        {
        }
    }

    internal sealed class Portfolio : DbMigrationsConfiguration<Services.PortfolioManagement.Repository.ModelContainer>
    {
        public Portfolio()
        {
            AutomaticMigrationsEnabled = false;
            MigrationsNamespace = "Services.PortfolioManagement.Repository.Migrations";
        }

        protected override void Seed(Services.PortfolioManagement.Repository.ModelContainer context)
        {
        }
    }

    internal sealed class Scheduler : DbMigrationsConfiguration<Services.Scheduler.Repository.ModelContainer>
    {
        public Scheduler()
        {
            AutomaticMigrationsEnabled = false;
            MigrationsNamespace = "Services.Scheduler.Repository.Migrations";
        }

        protected override void Seed(Services.Scheduler.Repository.ModelContainer context)
        {
        }
    }
}
When using EF Migrations you should have one data context per database. I know that it can grow really large, but by trying to split it you will run into several problems. One is the migration issue that you are experiencing. Later on you will probably face problems when trying to write queries that join tables from the different contexts. Don't go that way; it's against how EF is designed.