DBpedia Live mirror setup on Mac OS X

I am trying to set up a DBpedia Live Mirror on my personal Mac machine. Here is some technical host information about my setup:
Operating System: OS X 10.9.3
Processor: 2.6 GHz Intel Core i7
Memory: 16 GB 1600 MHz DDR3
Database server used for hosting data for the DBpedia Live Mirror: OpenLink Virtuoso (Open-source edition)
Here's a summary of the steps I followed so far:
1. Downloaded the initial data seed from DBpedia Live: dbpedia_2013_07_18.nt.bz2
2. Downloaded the synchronization tool from http://sourceforge.net/projects/dbpintegrator/files/.
3. Executed the virtload.sh script. I had to tweak some commands in it to be compatible with OS X (a rough sketch of the load appears after this list).
4. Adapted the synchronization tool's configuration files according to the README.txt file as follows:
a) Set the start date in the file "lastDownloadDate.dat" to the date of that dump (2013-07-18-00-000000).
b) Set the configuration information in the file "dbpedia_updates_downloader.ini", such as the login credentials for Virtuoso and the GraphURI.
5. Executed "java -jar dbpintegrator-1.1.jar" on the command line.
Running dbpintegrator-1.1.jar repeatedly showed the following errors:
INFO - Options file read successfully
INFO - File : http://live.dbpedia.org/changesets/lastPublishedFile.txt has been successfully downloaded
INFO - File : http://live.dbpedia.org/changesets/2014/06/16/13/000001.removed.nt.gz has been successfully downloaded
WARN - File /Users/shruti/virtuoso/dbpedia-live/UpdatesDownloadFolder/000001.removed.nt.gz cannot be decompressed due to Unexpected end of ZLIB input stream
ERROR - Error: (No such file or directory)
INFO - File : http://live.dbpedia.org/changesets/2014/06/16/13/000001.added.nt.gz has been successfully downloaded
WARN - File /Users/shruti/virtuoso/dbpedia-live/UpdatesDownloadFolder/000001.added.nt.gz cannot be decompressed due to Unexpected end of ZLIB input stream
ERROR - Error: (No such file or directory)
INFO - File : http://live.dbpedia.org/changesets/lastPublishedFile.txt has been successfully downloaded
INFO - File : http://live.dbpedia.org/changesets/2014/06/16/13/000002.removed.nt.gz has been successfully downloaded
INFO - File : /Users/shruti/virtuoso/dbpedia-live/UpdatesDownloadFolder/000002.removed.nt.gz decompressed successfully to /Users/shruti/virtuoso/dbpedia-live/UpdatesDownloadFolder/000002.removed.nt
WARN - null Function executeStatement
WARN - null Function executeStatement
WARN - null Function executeStatement
WARN - null Function executeStatement
WARN - null Function executeStatement
...
Questions
Why do I repeatedly see the following error when running the Java program: "dbpintegrator-1.1.jar"? Does this mean that the triples from these files were not updated in my live mirror?
WARN - File /Users/shruti/virtuoso/dbpedia-live/UpdatesDownloadFolder/000001.removed.nt.gz cannot be decompressed due to Unexpected end of ZLIB input stream
ERROR - Error: (No such file or directory)
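One way to check whether the downloaded changeset itself is truncated would be to re-fetch a failing file by hand and test it (a sketch; the URL is copied from the log above):

# download one of the failing changesets directly
curl -O http://live.dbpedia.org/changesets/2014/06/16/13/000001.removed.nt.gz
# test the archive without extracting it; a non-zero exit status means the file really is truncated or corrupt
gzip -t 000001.removed.nt.gz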
How can I verify that the data loaded in my mirror is up to date? Is there a SPARQL query I can use to validate this?
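For example, is something along these lines a reasonable start? It just counts the triples in the live graph via my local Virtuoso SPARQL endpoint (the endpoint URL and graph URI are assumptions from my setup):

curl -G 'http://localhost:8890/sparql' \
  --data-urlencode 'query=SELECT (COUNT(*) AS ?triples) FROM <http://live.dbpedia.org> WHERE { ?s ?p ?o }'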
I see that the data in my live mirror is missing wikiPageID (http://dbpedia.org/ontology/wikiPageID) and wikiPageRevisionID. Why is that? Is this data missing from the DBpedia Live data dumps?

It should be fixed now.
Can you try again from here: https://github.com/dbpedia/dbpedia-live-mirror

Related

Reg: database is not starting up, getting an error

Getting the below error while starting the database:
startup
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/mis/PARAMETERFILE/spfile.276.967375255'
ORA-17503: ksfdopn:10 Failed to open file +DATA/mis/PARAMETERFILE/spfile.276.967375255
ORA-04031: unable to allocate 56 bytes of shared memory ("shared pool","unknown object","KKSSP^24","kglseshtSegs")
Your database cannot find the SPFILE (the newer form of init.ora) with the actual system parameters inside ASM, or it has no permission to access it.
Either your Grid Infrastructure stack or the dbs/spfile.ora is pointing to the wrong file.
To find out what the Grid Infrastructure stack is using, run srvctl, which should display the parameter file name the database should be using:
srvctl config database -d <dbname>
...
Spfile: +DATA/<dbname>/PARAMETERFILE/spfile.269.1066152225
...
Then check (as the grid user), using asmcmd, whether the file is actually there:
asmcmd
ASMCMD> ls +DATA/<dbname>/PARAMETERFILE/
spfile.269.1066152225
If the name is different, then you have found the issue (and you have to point the database to the correct file; see the sketch at the end of this answer).
If the name is correct, then it could be wrong permissions on the oracle executable(s) (check My Oracle Support):
RAC Database Can't Start: ORA-01565, ORA-17503: ksfdopn:10 Failed to open file +DATA/BPBL/spfileBPBL.ora (Doc ID 2316088.1)
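If the registered spfile does turn out to be the wrong one, repointing the clusterware registration is typically done with srvctl as well; a minimal sketch, assuming an 11gR2-style command line (the database name and spfile path are placeholders, and newer releases use -spfile instead of -p):

# update the spfile recorded in the clusterware configuration for this database
srvctl modify database -d <dbname> -p '+DATA/<dbname>/PARAMETERFILE/spfile.269.1066152225'
# verify the change, then restart the database
srvctl config database -d <dbname>
srvctl stop database -d <dbname>
srvctl start database -d <dbname>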

Stanford CoreNLP - Unknown variable WORKDAY

I am processing some documents and I am getting many WORKDAY messages, as seen below.
There's a similar issue posted here for WEEKDAY. Does anyone know how to deal with this message? I am running CoreNLP in a Java server on Windows and accessing it from a Jupyter Notebook using Python code.
[pool-2-thread-2] INFO edu.stanford.nlp.ling.tokensregex.types.Expressions - Unknown variable: WORKDAY
[pool-2-thread-2] INFO edu.stanford.nlp.ling.tokensregex.types.Expressions - Unknown variable: WORKDAY
[pool-2-thread-2] INFO edu.stanford.nlp.ling.tokensregex.types.Expressions - Unknown variable: WORKDAY
[pool-1-thread-7] WARN CoreNLP - java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error making document
This is an error in the current SUTime rules file (and it's actually been there for quite a few versions). If you want to fix it immediately, you can do the following. Or we'll fix it in the next release. These are Unix commands, but the same thing will work elsewhere except for how you refer to and create folders.
Find this line in sutime/english.sutime.txt and delete it. Save the file.
{ (/workday|work day|business hours/) => WORKDAY }
Then copy the file to the path it occupies inside the models jar, and update the jar. In the root directory of the CoreNLP distribution, do the following (assuming you don't already have an edu file/folder in that directory):
# recreate the directory structure the rules file has inside the models jar
mkdir -p edu/stanford/nlp/models/sutime
# copy in the edited rules file
cp sutime/english.sutime.txt edu/stanford/nlp/models/sutime
# overwrite the old copy inside the models jar
jar -uf stanford-corenlp-4.2.0-models.jar edu/stanford/nlp/models/sutime/english.sutime.txt
# clean up the temporary directory tree
rm -rf edu

IBM MQ8.0.0.7 Queue Manager Startup issue

Hi, we are in the process of migrating to MQ 8.0.0.7 on Linux. We have created the queue manager, created the switch load file with Oracle client 11.2.0.4, and updated the switch file configuration in the qm.ini file.
The switch file was built using IBM MQ 8.0.0.7 and Oracle client 11.2.0.4.
But when we try to start up the queue manager, we get the error below:
04/10/2018 08:15:07 AM - Process(32092.1) User(mqm) Program(amqzxma0) Host(lswttsccsap5u) Installation(Installation1) VRMF(8.0.0.7) QMgr(NYCOLI2_QM.UATIN)
AMQ6175: The system could not dynamically load the shared library '/var/mqm/exits64/oraswit'. The system returned error message '/var/mqm/exits64/oraswit: undefined symbol: xaosw'.
EXPLANATION: This message applies to UNIX systems. The shared library '/var/mqm/exits64/oraswit' failed to load correctly due to a problem with the library. ACTION: Check the file access permissions and that the file has not been corrupted.
----- amqxufnx.c : 1436 -------------------------------------------------------
04/10/2018 08:15:07 AM - Process(32092.1) User(mqm) Program(amqzxma0) Host(lswttsccsap5u) Installation(Installation1) VRMF(8.0.0.7) QMgr(NYCOLI2_QM.UATIN)
AMQ7622: WebSphere MQ could not load the XA switch load file for resource manager 'Oracle_CMXIUAT_AIX'.
qm.ini file Stanza
XAResourceManager:
Name=MyQueuManager
SwitchFile=oraswit
ThreadOfControl=THREAD
XAOpenString=Oracle_XA+Acc=P/myusername/mypassword+SesTm=100+dbgfl=15+LogDir=/var/mqm/xa_logs+dbgfl=15+SqlNet=SSS+threads=TRUE
Could you please advise?
The SwitchFile doesn't seem correct. Where did you get that from?
I found these for Linux:
Linux (non-threaded): libmqmxa64.so, libmqcxa64.so
Linux (threaded): libmqmxa64_r.so, libmqcxa64_r.so
Check out the documentation for setting up XA:
https://www.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.sce.doc/q023610_.htm
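Before changing anything, it may also be worth confirming that the custom switch file really is missing the Oracle XA symbol named in the error; a quick diagnostic sketch (the path is taken from the error message above, and the expectation that xaosw should be exported is an assumption about how the switch file was linked):

# list the dynamic symbols in the switch load file and look for Oracle's XA switch structure
nm -D /var/mqm/exits64/oraswit | grep -i xaosw
# check whether the Oracle client libraries it depends on can be resolved at load time
ldd /var/mqm/exits64/oraswit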

core dump generation using Websphere liberty is failing

I am trying to generate a javadump and a core dump for the Liberty process, using:
/opt/IBM/wlp/bin/server javadump --include=system
It fails with the following error:
Server default dump complete in /opt/ibm/wlp/usr/servers//The core file created by child process with pid = 22000 was not found. Expected to find core file with name "/var/support/core_kernel-command-.22000"
My core_pattern (/proc/sys/kernel/core_pattern) is /var/support/core_%e.%p
I am using Java 8.
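For reference, these are the standard Linux settings I am checking on the host (nothing here is Liberty-specific):

# current kernel core file name pattern
cat /proc/sys/kernel/core_pattern
# core file size limit for the process environment (0 would suppress core files entirely)
ulimit -c
# does the target directory exist, and is it writable by the server user?
ls -ld /var/support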

HPCC/HDFS Connector

Does anyone know about the HPCC/HDFS connector? We are using both HPCC and Hadoop. There is a utility (the HPCC/HDFS connector) developed by HPCC which allows an HPCC cluster to access HDFS data.
I have installed the connector, but when I run the program to access data from HDFS it gives an error saying libhdfs.so.0 doesn't exist.
I tried to build libhdfs.so using the command
ant compile-libhdfs -Dlibhdfs=1
but it gives me the error
target "compile-libhdfs" does not exist in the project "hadoop"
I used one more command,
ant compile-c++-libhdfs -Dlibhdfs=1
and it gives the error
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
[get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
to /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
BUILD FAILED java.net.ConnectException: Connection timed out
Any suggestion would be a great help.
Chhaya, you might not need to build libhdfs.so; depending on how you installed Hadoop, you might already have it.
Check in HADOOP_LOCATION/c++/Linux-<arch>/lib/libhdfs.so, where HADOOP_LOCATION is your Hadoop install location and <arch> is the machine's architecture (i386-32 or amd64-64).
Once you locate the lib, make sure the H2H connector is configured correctly (see page 4 here).
It's just a matter of updating the HADOOP_LOCATION var in the config file:
/opt/HPCCSystems/hdfsconnector.conf
good luck.
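A minimal sketch of those two steps, assuming a tarball install of Hadoop 0.20.203.0 on a 64-bit machine (the exact paths follow the layout described above and may differ on your system):

# look for the prebuilt library shipped with the Hadoop distribution
ls /home/hadoop/hadoop-0.20.203.0/c++/Linux-amd64-64/lib/libhdfs.so*
# then edit the connector config and set HADOOP_LOCATION to the install path, e.g.
# in /opt/HPCCSystems/hdfsconnector.conf: HADOOP_LOCATION=/home/hadoop/hadoop-0.20.203.0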
