When I import a project into chainbuilder and re-harvest, there are still modules that are marked as "not available". The project does not run properly and some visualizations are not shown. Even the visualizations that are loading don't show the proper content.
When ChainBuilder exports a project it only exports the modules, their connections and settings, but not the data. Each module is identified by a unique key. When harvesting after importing a project, ChainBuilder searches the current database of services and matches services with the same key to the imported modules. If a module doesn't match any service, it will still be marked as not available. In this case you need to find the required services (with the matching keys), add them to the system, and harvest again.
Since the data is not transferred automatically during the import, the workflows that provide the appropriate data have to be rerun. There should be some starting points in the workflow that will reload all the data and restore the necessary datasets into the workflows, and therefore also into the visualizations. See if you can find a button, text field or drop-down box that starts that process. Sometimes more than one button/input field will have to be triggered.
I want to add 3 extensions to NiFi (nifi-encryptMD5-nar-1.0.nar-unpacked, nifi-getOperator-nar-1.0-SNAPSHOT.nar-unpacked, nifi-splitAttributeValue-nar-1.0.nar-unpacked).
I added the extension folders to the directory /opt/nifi/nifi-1.9.2/work/nar/extensions/
Then when I restart the NiFi service, NiFi shuts down and does not come back up. When I force the start with the user nifi, NiFi starts, but the extensions have been deleted from the directory /opt/nifi/nifi-1.9.2/work/nar/extensions/
You have to put the *.nar packages into the nifi/lib directory.
NiFi will extract them automatically on startup into the nifi/work folder.
As daggett says, you need to use the .nar files, not any unpacked directories.
In your nifi.properties there will be two or more properties that provide locations for NiFi libraries:
nifi.nar.library.directory=./lib
nifi.nar.library.autoload.directory=./extensions
nifi.nar.library.directory.<something>=./<yourdir>
The first is the default and contains all the basic NiFi files. It is only checked on startup and any valid nars found are unpacked in the work directory and loaded. Generally you don't want to add anything here except in test environments as it complicates upgrades.
The second is empty by default but it is scanned every 30 seconds for new .nars. These will be unpacked and loaded if possible, but only for new libraries. Already loaded libraries will not be reloaded.
This is a good location to add your validated custom libraries without having to restart NiFi.
The third and any further directories need to be added manually to the properties file. These are loaded on startup only and are useful if you have a lot of custom processors and want to keep them organized.
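For example, a third location could be declared with a line like the following (the property suffix and the path here are placeholders, not defaults):
nifi.nar.library.directory.custom=/opt/nifi/custom-nars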
In your situation I'd put the .nars in the extensions folder and check the logs to see if they were loaded successfully. You'll then need a full refresh of the browser window (Shift+F5 I think) before they show up in the list of processors.
In a cluster setup, add the .nars on all nodes and verify their availability before trying to add them to the canvas or things might get messy.
I'm quite new to Android and I am currently working on an app which should use a Room database. Following the documentation, a Room database can be created through the following lines:
myDatabase = Room.databaseBuilder(appContext, MyDatabase.class, "MyDB")
.build();
Now where did room create the database file?
It can't be found in my project folder.
The documentation doesn't mention anything about it and -generally speaking- barely gives any information about how this thing works.
Where is the database?
Does DatabaseBuilder.build() manage to open the existing database created during previous app launches?
The list of questions is long.
Any information about the .build() thing as well as further information about Room (misconceptions etc.) is very much appreciated, for the documentation doesn't really make things clear for me.
Thank you!
Now where did room create the database file?
The database (a file) will be placed at the default location on the actual device, which is data/data/<the_package_name>/databases/MyDB.
In your case, as you have coded :-
myDatabase = Room.databaseBuilder(appContext, MyDatabase.class, "MyDB")
.build();
Then the database files will be:-
data/data/<your_package_name>/databases/MyDB
data/data/<your_package_name>/databases/MyDB-wal
data/data/<your_package_name>/databases/MyDB-shm
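If you want to confirm the location at runtime, the standard Android Context API can report it; a small sketch (appContext is the same context passed to the builder):

File dbFile = appContext.getDatabasePath("MyDB"); // java.io.File
Log.d("RoomDB", "database path: " + dbFile.getAbsolutePath()); // android.util.Log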
It can't be found in my project folder.
The database file is not part of the project; it is a file that is created and maintained on the actual device on which the App has been installed.
However, you can use the Database Inspector (now App Inspection) in Android Studio to view the database.
You can also view the files, if whatever device you test on allows access, by using the Device File Explorer.
Does DatabaseBuilder.build() manage, to open the existing database created from previous app launches?
Yes, if the file exists then it is opened, otherwise the file is created. If you uninstall the App, this effectively deletes the file. The whole idea of a database is that it persists.
The build() undertakes various tasks, primarily seeing if the underlying file exists and then opening the file. In doing so it
extracts the version number that is stored in the file and compares it against the number coded within the App (via the @Database annotation).
If the version number from the App is greater, an attempt is made to find a Migration (recently AutoMigrations have been added to Room).
compares the expected schema (according to the entities defined as part of the @Database annotation) against what is found in the file.
A mismatch will result in the app crashing, so fixes would have to be made.
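As an illustration only, here is a hedged sketch of what a manual Migration for a version bump from 1 to 2 could look like (the table and column names are invented for the example; Migration is androidx.room.migration.Migration):

static final Migration MIGRATION_1_2 = new Migration(1, 2) {
    @Override
    public void migrate(SupportSQLiteDatabase database) {
        // hypothetical schema change between version 1 and version 2
        database.execSQL("ALTER TABLE user ADD COLUMN nickname TEXT");
    }
};

myDatabase = Room.databaseBuilder(appContext, MyDatabase.class, "MyDB")
        .addMigrations(MIGRATION_1_2)
        .build();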
Note that referring to a single file is a simplification: by default Room uses a journal mode called WAL (Write-Ahead Logging). In WAL mode there will be an additional 2 files that the SQLite routines maintain (you don't need to do anything):-
the database file name suffixed with -wal is the primary wal file into which changes are written (they are applied to the main database automatically).
the database file name suffixed with -shm (a shared-memory index file that SQLite uses to coordinate access to the -wal file).
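If the extra -wal/-shm files get in the way (e.g. when copying the database file off the device), Room can be asked to use a single-file journal instead; a small sketch:

myDatabase = Room.databaseBuilder(appContext, MyDatabase.class, "MyDB")
        .setJournalMode(RoomDatabase.JournalMode.TRUNCATE) // disable WAL, single file
        .build();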
I saw Neo4j can run as an impermanent DB for unit testing purposes, but I'm not sure if this fits my needs. I have my data stored in Neo4j the usual way (persistent) but, starting from my data, I want to let each user start an "experimental session": the users add/delete nodes and relationships, but NOT in a permanent way, just experimenting with the data (after that session the edits should be lost). The edits shouldn't be saved, and obviously they shouldn't be visible to the others. What's the best way to accomplish that?
Using an impermanent database should work (a minimal sketch follows this list). You would
need to import the data into each new database
spring-data-neo4j is not able to connect to multiple databases (in the current release), so you would need to start multiple instances of your application, e.g. in a Tomcat container
when your application stops (or crashes) you would obviously lose data
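A minimal sketch of the impermanent route, assuming the embedded test API of that era (org.neo4j.test.TestGraphDatabaseFactory from the Neo4j test artifacts):

GraphDatabaseService db = new TestGraphDatabaseFactory().newImpermanentDatabase();
try (Transaction tx = db.beginTx()) {
    // re-import the base data for this session, e.g. by replaying Cypher statements
    db.execute("CREATE (:Person {name: 'example'})");
    tx.success();
}
// ... experimental session ...
db.shutdown(); // everything is discarded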
Or you could potentially use only one database, with the base data being public (= visible to everyone), and then add an owner property to all new nodes/relationships.
When querying the data you would check the property is either public or the current user.
At the end of the session you would just delete all nodes and relationships with given owner.
If you also want to edit existing data then it gets more complicated: you could create a copy of the node/relationship and handle that somehow, or, if the dataset is not too large, copy the whole dataset. The basic queries for the owner approach are sketched below.
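A hedged sketch of the owner-property queries with embedded Cypher (the owner property name, the 'public' marker and the pre-4.0 {param} syntax are illustrative):

Map<String, Object> params = Collections.singletonMap("owner", currentUserId); // java.util
// read: base data is marked 'public', session data carries the user's id
db.execute("MATCH (n) WHERE n.owner IN ['public', {owner}] RETURN n", params);
// end of session: remove everything this user created
db.execute("MATCH (n {owner: {owner}}) DETACH DELETE n", params);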
You can build a Docker image from the neo4j base image (or build your own) and copy your graph.db into it.
Then you can have every user start a Docker container from said image.
If that doesn't answer your question, more info is needed.
I'm following the steps from the Adobe instructions on How to Build AEM Projects using Maven and I'm not seeing how to populate or configure the metadata for the contents.
I can edit and configure the actual files, but when I push the zip file to the CQ instance, the installed component has a jcr:primaryType of nt:folder, while the item I'm trying to duplicate has a jcr:primaryType of cq:Component (as well as many other properties). So is there a way to populate that data without manually interacting with CQ?
I'm very new to AEM, so it's entirely possible I've overlooked something very simple.
Yes, it is possible to configure JCR node types without manually changing them in CQ.
Make sure you have a .content.xml file in the component folder and that it contains the correct jcr:primaryType (e.g. jcr:primaryType="cq:Component").
This file contains the metadata for mapping the JCR node onto the file system.
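As a sketch, a minimal .content.xml for a component could look like the following (jcr:title and componentGroup are placeholder values):

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:cq="http://www.day.com/jcr/cq/1.0"
          xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="cq:Component"
    jcr:title="My Component"
    componentGroup="My Components"/>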
For beginners it may be useful to take a look at VLT, which is used to import/export JCR content to and from the file system. Actually, the component's files in your project should be similar to the result of a VLT export of the component from the JCR.
On the project I am working on, there are some proxy items that were added at some point from source location A to location B. However, right now it is not possible to check the source of the proxy, and the proxy folder in B does not show anything that suggests it's a proxy, besides the visual cue that it's grayed out.
When I analysed this article, I looked into the web.config and found this:
<proxiesEnabled>false</proxiesEnabled>
<publishVirtualItems>true</publishVirtualItems>
This seems to suggest that when the proxies were published, they were published as regular items, losing any connection to their source. Since I want to recreate the proxies, due to some weird issues where the layout on the template's standard values item was not propagating correctly to the proxied items, I wanted to try to rename the old proxy folder and create a new one. However, when I tried to rename it I got a modal alert with this message:
"This item occurs in other locations. If you rename it, the item will be renamed in the other locations as well. Are you sure you want to rename 'MyFoo'?"
Does this mean the item is still attached to the source?
I am using Sitecore 6.2.0 (rev. 100701)
I suppose that the settings you mentioned are for the master database. Now if you take a closer look at the article you reference, it lists two valid cases of proxy setup:
when web database also relies on proxies
when web database contains regular items only which came from publishing
Both of these cases assume that the master database has proxiesEnabled='true'. It doesn't make any sense otherwise: if proxies are disabled, the rest of the mechanism doesn't work and there are no virtual items.
And I can see proxiesEnabled='false' in the example you mentioned.
I'm not sure about the message you get. But if I needed to change the proxy definition, I would do the following (the resulting web.config settings are shown after the list):
make sure proxiesEnabled='false' for web database (I guess this is your intention)
disable proxies for master database and change the proxies definition the way you want
set publishVirtualItems to true for master database
turn the proxies on for master database
make sure virtual items are in place and publish the site
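For reference, after these steps the proxy-related entries for the master database in web.config would end up looking like this (mirroring the snippet from the question, with the values the steps call for):

<proxiesEnabled>true</proxiesEnabled>
<publishVirtualItems>true</publishVirtualItems>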
Try this on some test environment and experiment to get the behavior you'd like - playing with the live site is bad karma :)