I'm setting up a demo with a database that contains around 150 million triples and wanted to confirm which repository settings I should change to maximize the performance of read queries. The only two things I have changed in the attached template so far are:
owlim:entity-index-size "155000000" ;
owlim:entity-id-size "32" ;
Any recommendations for updates, as well?
Please find the config template below.
Thanks.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix rep: <http://www.openrdf.org/config/repository#>.
@prefix sr: <http://www.openrdf.org/config/repository/sail#>.
@prefix sail: <http://www.openrdf.org/config/sail#>.
@prefix owlim: <http://www.ontotext.com/trree/owlim#>.
[] a rep:Repository ;
    rep:repositoryID "Northwind" ;
    rdfs:label "Northwind sample database" ;
    rep:repositoryImpl [
        rep:repositoryType "graphdb:FreeSailRepository" ;
        sr:sailImpl [
            sail:sailType "graphdb:FreeSail" ;
            owlim:owlim-license "" ;
            owlim:base-URL "http://example.org/graphdb#" ;
            owlim:defaultNS "" ;
            owlim:entity-index-size "155000000" ;
            owlim:entity-id-size "32" ;
            owlim:imports "" ;
            owlim:repository-type "file-repository" ;
            owlim:ruleset "owl-horst-optimized" ;
            owlim:storage-folder "storage" ;
            owlim:enable-context-index "false" ;
            owlim:cache-memory "80m" ;
            owlim:tuple-index-memory "80m" ;
            owlim:enablePredicateList "false" ;
            owlim:predicate-memory "0%" ;
            owlim:fts-memory "0%" ;
            owlim:ftsIndexPolicy "never" ;
            owlim:ftsLiteralsOnly "true" ;
            owlim:in-memory-literal-properties "false" ;
            owlim:enable-literal-index "true" ;
            owlim:index-compression-ratio "-1" ;
            owlim:check-for-inconsistencies "false" ;
            owlim:disable-sameAs "false" ;
            owlim:enable-optimization "true" ;
            owlim:transaction-mode "safe" ;
            owlim:transaction-isolation "true" ;
            owlim:query-timeout "1800" ;
            owlim:query-limit-results "0" ;
            owlim:throw-QueryEvaluationException-on-timeout "false" ;
            owlim:useShutdownHooks "true" ;
            owlim:read-only "false" ;
            owlim:nonInterpretablePredicates "http://www.w3.org/2000/01/rdf-schema#label;http://www.w3.org/1999/02/22-rdf-syntax-ns#type;http://www.ontotext.com/owlim/ces#gazetteerConfig;http://www.ontotext.com/owlim/ces#metadataConfig" ;
        ]
    ] .
Right now, I am working on a "Text Editor" made with Bash. Everything was going perfectly until I tested it. When I opened the file the script created, everything was jumbled up. I eventually figured out it had something to do with the cat BASHTE/* >> $file I had put in. I still have no idea why this happens. My crappy original code is below:
#!/bin/bash
# ripoff vim
clear
echo "###############################################################################"
echo "# BASHTE TEXT EDITOR - \\\ = interupt :q = quit :w = write #"
echo "# :wq = Write and quit :q! = quit and discard :dd = Delete Previous line #"
echo "###############################################################################"
echo ""
read -p "Enter file name: " file
touch .$file
mkdir BASHTE
clear
echo "###############################################################################"
echo "# BASHTE TEXT EDITOR - \\\ = interupt :q = quit :w = write #"
echo "# :wq = Write and quit :q! = quit and discard :dd = Delete Previous line #"
echo "###############################################################################"
while true
do
read -p "$lines >" store
if [ "$store" = "\\:q" ]
then
break
elif [ "$store" = "\\:w" ]
then
cat BASHTE/* >> $file
elif [ "$store" = "\\:wq" ]
then
cat BASHTE/* >> $file
rm -rf .$file
break
elif [ "$store" = "\\:q!" ]
then
rm -rf BASHTE
rm -rf $file
break
elif [ "$store" = "\\:dd" ]
then
LinesMinusOne=$(expr $lines - 1)
rm -rf BASHTE/$LinesMinusOne.txt
else
echo $store >> BASHTE/$lines.txt
# counts the number of times the while loop is run
((lines++))
fi
done
This is what I got after I typed in the alphabet:
b
j
k
l
m
n
o
p
q
r
s
c
t
u
v
w
x
y
z
d
e
f
g
h
I
This was what I inputted
a
v
c
d
e
f
g
h
I
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
\\:wq
Any help would be great. Thanks!
The glob BASHTE/* expands in lexical order, so every file whose name starts with 1 comes before every file whose name starts with 2, and so on. That means the order of your input lines is:
1 a
10 j
11 k
12 l
...and so on...
To make the file names sort correctly with the * glob, you'll need to pad them with leading zeros, for example:
# ...
echo "$store" >> "BASHTE/$(printf %020d $lines).txt"
# ...
I chose the %020d format because it can accommodate any line count a 64-bit system could produce: 2 ** 64 = 18446744073709551616, which is 20 digits long.
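Not part of the original answer, but a quick way to see the difference for yourself is a throwaway comparison like this (the demo directory names are made up):
# Illustration only: compare how unpadded and zero-padded names expand with *.
mkdir -p demo_plain demo_padded
for n in 1 2 10 11; do
    touch "demo_plain/$n.txt"
    touch "demo_padded/$(printf %020d "$n").txt"
done
echo demo_plain/*    # 1.txt 10.txt 11.txt 2.txt -- lexical, out of numeric order
echo demo_padded/*   # ...0001 ...0002 ...0010 ...0011 -- numeric order preserved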
I want to run a particular executable depending on a variable of my script, cur_num.
I have this code, where the paths are "random", meaning they have nothing to do with the sequential cur_num:
run() {
    if [ "$cur_num" = "0" ]; then
        ~/pathDirName0/execFile0
    elif [ "$cur_num" = "1" ]; then
        ~/pathDirName1/execFile1
    elif [ "$cur_num" = "2" ]; then
        ~/pathDirName2/execFile2
    elif [ "$cur_num" = "3" ]; then
        ~/pathDirName3/execFile3
    elif [ "$cur_num" = "4" ]; then
        ~/pathDirName4/execFile4
    fi
}
If there are many more cases, that results in a very long if - elif chain.
Since there are no enums in bash to model the cur_num-to-path relation, is there a cleaner way to obtain the desired path dynamically instead of using lots of ifs?
Try a case statement:
case $cur_num in
0) ~/pathDirName0/execFile0;;
1) ~/pathDirName1/execFile1;;
2) ~/pathDirName2/execFile2;;
...
esac
set lets you build the positional-parameter list, which is the closest thing to an array that portable shell has, so you can use it to create an ordered list:
set -- ~/pathDirName0/execFile0 ~/pathDirName1/execFile1 ~/pathDirName2/execFile2 ~/pathDirName3/execFile3 ~/pathDirName4/execFile4
Then, to access these items by index, you use $x, where x is the index of the item.
Now your code would look something like this:
run() {
    original=$*                 # Save the current positional params (lossy if they contain spaces)
    : $((cur_num += 1))         # Positional param indexing starts from 1
    set -- ~/pathDirName0/execFile0 ~/pathDirName1/execFile1 \
        ~/pathDirName2/execFile2 ~/pathDirName3/execFile3 \
        ~/pathDirName4/execFile4
    eval "\"\$$cur_num\""       # cur_num = 1 runs $1 == execFile0
    set -- $original            # Restore the (word-split) original args
}
A method that is both less and more hacky, which works for indexes above 9, doesn't touch the original positional parameters, and doesn't use eval:
run() {
    count=0
    for item in "$@"; do
        [ "$count" = "$cur_num" ] && {
            "$item"
            return
        }
        : "$((count += 1))"
    done
    echo "No item found at index '$cur_num'"
}
cur_num="$1" # Assuming first arg of script.
run ~/pathDirName0/execFile0 ~/pathDirName1/execFile1 \
~/pathDirName2/execFile2 ~/pathDirName3/execFile3 \
~/pathDirName4/execFile4
I am using https://shacl.org/playground/
I have the following Shape Graph:
@prefix hr: <http://learningsparql.com/ns/humanResources#> .
@prefix d: <http://learningsparql.com/ns/data#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
hr:ClassShape
    a sh:NodeShape ;
    sh:targetSubjectsOf rdf:type ;
    sh:or (
        [
            sh:path rdf:type ;
            sh:nodeKind sh:IRI ;
            sh:hasValue rdfs:Class ;
        ]
        [
            sh:path rdf:type ;
            sh:nodeKind sh:IRI ;
            sh:hasValue rdf:Property ;
        ]
    ) ;
    sh:closed true ;
.
I have the following Data Graph:
@prefix hr: <http://learningsparql.com/ns/humanResources#> .
@prefix d: <http://learningsparql.com/ns/data#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
#### Regular RDFS modeling ####
hr:Employee a rdfs:Class .
hr:Another a rdfs:Class .
hr:name
rdf:type rdf:Property ; .
hr:hireDate
rdf:type rdf:Property ; .
hr:jobGrade
rdf:type rdf:Property ; .
I want to verify that every node which declares an rdf:type has a value of either rdfs:Class or rdf:Property.
I am getting the following validation errors:
[
a sh:ValidationResult ;
sh:resultSeverity sh:Violation ;
sh:sourceConstraintComponent sh:ClosedConstraintComponent ;
sh:sourceShape hr:ClassShape ;
sh:focusNode hr:Employee ;
sh:resultPath rdf:type ;
sh:value rdfs:Class ;
sh:resultMessage "Predicate is not allowed (closed shape)" ;
] .
[
a sh:ValidationResult ;
sh:resultSeverity sh:Violation ;
sh:sourceConstraintComponent sh:ClosedConstraintComponent ;
sh:sourceShape hr:ClassShape ;
sh:focusNode hr:Another ;
sh:resultPath rdf:type ;
sh:value rdfs:Class ;
sh:resultMessage "Predicate is not allowed (closed shape)" ;
] .
[
a sh:ValidationResult ;
sh:resultSeverity sh:Violation ;
sh:sourceConstraintComponent sh:ClosedConstraintComponent ;
sh:sourceShape hr:ClassShape ;
sh:focusNode hr:name ;
sh:resultPath rdf:type ;
sh:value rdf:Property ;
sh:resultMessage "Predicate is not allowed (closed shape)" ;
] .
[
a sh:ValidationResult ;
sh:resultSeverity sh:Violation ;
sh:sourceConstraintComponent sh:ClosedConstraintComponent ;
sh:sourceShape hr:ClassShape ;
sh:focusNode hr:hireDate ;
sh:resultPath rdf:type ;
sh:value rdf:Property ;
sh:resultMessage "Predicate is not allowed (closed shape)" ;
] .
[
a sh:ValidationResult ;
sh:resultSeverity sh:Violation ;
sh:sourceConstraintComponent sh:ClosedConstraintComponent ;
sh:sourceShape hr:ClassShape ;
sh:focusNode hr:jobGrade ;
sh:resultPath rdf:type ;
sh:value rdf:Property ;
sh:resultMessage "Predicate is not allowed (closed shape)" ;
] .
I am not sure why or what I need to do to resolve them. I believe all of the validation errors are related, so the solution to one should provide the solution to the rest.
What should my Shape file look like?
You've structured the sh:or incorrectly; here is a working example following the SHACL docs on sh:or:
@prefix hr: <http://learningsparql.com/ns/humanResources#> .
@prefix d: <http://learningsparql.com/ns/data#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
hr:ClassShape
    a sh:NodeShape ;
    sh:targetSubjectsOf rdf:type ;
    sh:property [
        sh:path rdf:type ;
        sh:nodeKind sh:IRI ;
        sh:or (
            [ sh:hasValue rdfs:Class ; ]
            [ sh:hasValue rdf:Property ; ]
        )
    ] ;
    sh:closed true ;
.
sh:closed only looks at property shapes declared directly on the shape via sh:property. So it should work if you state:
hr:ClassShape
    sh:property [
        sh:path rdf:type ;
    ] ;
    sh:closed true ;
    ...
Closed shapes do not consider sh:or or other complex structures; see the details at
https://www.w3.org/TR/shacl/#ClosedConstraintComponent
I have to rename a couple of hundred files by adding the vdate from each file's header to its name.
If vdate = 19971222, then I want the name of that nc file to become rerun4_spindown_19971222.nc
I know I can find the vdate by ncdump -h filename (see example header below).
ncdump -h rerun4_1997_spindown_09191414_co2
netcdf rerun4_1997_spindown_09191414_co2 {
dimensions:
lon = 768 ;
lat = 384 ;
nhgl = 192 ;
nlevp1 = 96 ;
spc = 32896 ;
// global attributes:
:file_type = "Restart history file" ;
:source_type = "IEEE" ;
:history = "" ;
:user = " Linda" ;
:created = " Date - 20190919 Time - 134447" ;
:label_1 = " Atmospheric model " ;
:label_2 = " Library 23-Feb-2012" ;
:label_3 = " Lin & Rood ADVECTION is default" ;
:label_4 = " Modified physics" ;
:label_5 = " Modified radiation" ;
:label_6 = " Date - 20190919 Time - 134447" ;
:label_7 = " Linda " ;
:label_8 = " Linux " ;
:fdate = 19950110 ;
:ftime = 0 ;
:vdate = 19971222 ;
:vtime = 235800 ;
:nstep = 776158 ;
:timestep = 120. ;
However, I would then have to open each file and change its name manually... for hundreds of files. I would prefer to write a bash script that does this automatically.
I am sure there must be a more intelligent way to extract the vdate from the nc header. Could you help me out?
Thank you!
In theory, something like this should work:
#! /bin/sh
for file in rerun4_*_spindown_* ; do
    vdate=$(ncdump -h "$file" | awk '$1 == ":vdate" { print $3 }')
    new_name="rerun4_spindown_$vdate.nc"
    mv "$file" "$new_name"
done
I do not have access to netCDF files, so more testing is needed.
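Since this is untested, a cautious way to try it (my addition, using the same commands as above) is a dry run that only prints the intended renames and skips any file where no vdate is found:
#! /bin/sh
# Dry run: print the intended renames without touching any files.
for file in rerun4_*_spindown_* ; do
    vdate=$(ncdump -h "$file" | awk '$1 == ":vdate" { print $3 }')
    if [ -z "$vdate" ]; then
        echo "WARNING: no vdate found in $file, skipping" >&2
        continue
    fi
    echo mv "$file" "rerun4_spindown_$vdate.nc"
done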
Running FileMaker 13 on Mac OS X Yosemite.
We have a Quick Look script that, up until Yosemite, worked without issue. Normally, it takes a .doc/.docx file in the container field and opens it in Quick Look.
However, in Yosemite it opens qlmanage and then causes FileMaker to freeze and crash.
Set Variable [ $file ; Value: ${database}::Container Field ]
Set Variable [ $path ; Value: Get ( Temporary Path ) & $file ]
Set Variable [ $script ; Value:
Let (
thepath = Middle ( $path ; Position ( $path ; "/" ; 1 ; 2 ) ; Length ( $path ) ) ;
"set p to POSIX path of " & Quote (thepath) &
"¶ do shell script \"qlmanage -p \" & quoted form of p" )
]
Export Field Contents [Database::Container Field ; "$path" ]
Perform Applescript [ $script ]
Can anyone give me some ideas on what might be going wrong here?
Thanks
I succeeded with an edited version of your script using FileMaker Pro Advanced 14.0.2 running under OS X Yosemite 10.10.5 in a demo file that looked like this:
Set Variable [ $_file ; Value: GetAsText ( Table::container ) ]
Set Variable [ $_fm_path ; Value: Get ( TemporaryPath ) & $_file ]
Set Variable [ $_as_path ; Value: Middle (
$_fm_path;
Position ( $_fm_path; "/" ; 1 ; 2 ) ;
Length ( $_fm_path) )
]
Set Variable [ $_script ; Value: List (
"set p to POSIX path of " & Quote ( $_as_path ) ;
"do shell script \"qlmanage -p \" & quoted form of p" )
]
Export Field Contents [ Table::container ; "$_fm_path" ]
Perform AppleScript [ $_script ]
Exit Script []
The primary differences between this and what you showed are:
I used a direct reference to the table name. I'm actually not sure what ${database} refers to. Perhaps Get ( FileName )?
I stored the AppleScript path in a variable for easier debugging.
If this doesn't work, I'd follow the recommendation I gave about testing the contents of $_script in Script Editor and the contents of the shell's p variable in Terminal.
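For that Terminal test, a minimal check (the path below is just a placeholder) is to run qlmanage directly against a known document and confirm the Quick Look panel opens:
# Placeholder path: point this at any real .docx on disk.
qlmanage -p "$HOME/Desktop/test.docx"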