PeopleSoft Payroll Interface field length - Oracle

I have added a field to a Payroll Interface definition. I am using the delivered field TEXT254. The field where you define the length in bytes (on the field definition table) is three characters, so it would appear that you can define the length as 999 bytes. The PI process failed when I set the length to 999 bytes; it only ran after I lowered it to 150 bytes. I am experimenting with it, slowly increasing the value, but I'm wondering if anyone knows what the limit really is. Our PI takes 3 hours to run, so experimenting takes a long time.
edit - I cut down the runtime by getting rid of all but one company. The largest byte size that I seem to be able to get to run is 240. I did some research, and it looks like when you build your tables, Oracle will set the field to VARCHAR2(n*3), where n is the size of the field specified in App Designer. Sure enough, the script generated by Project...Build sets my field to VARCHAR2(762).
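For illustration, the generated DDL looked roughly like the sketch below (the record name and the other column are made up; the point is the tripled length on the TEXT254 column):

-- Rough sketch of the DDL that App Designer's Project...Build generates for a
-- record containing TEXT254 defined as 254 bytes; on a multibyte character set
-- the build triples the defined length, giving VARCHAR2(762).
CREATE TABLE PS_MY_PI_RECORD (
  EMPLID   VARCHAR2(33)  NOT NULL,
  TEXT254  VARCHAR2(762)
);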

This is what I found: the data that the PI exports is pretty much unlimited. In the PI_PARTIC_EXPT table, the EXPORT_ROW field is 250 characters; if the row you're exporting exceeds this, a new row is inserted with a new sequence number (export_seq) and the data continues in the EXPORT_ROW field of that new row.
There is, however, a limit to an individual field that the PI can handle, and that is 240 characters, so once I limited the field to 240 characters all was well.
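To see the continuation behaviour, a query along these lines works (the filter column here is an assumption for illustration; check the actual PI_PARTIC_EXPT record definition for its real keys):

-- Export data longer than 250 characters simply spills into additional rows
-- with an incremented EXPORT_SEQ; ordering by it reassembles the export record.
SELECT EXPORT_SEQ, EXPORT_ROW
  FROM PS_PI_PARTIC_EXPT
 WHERE EMPLID = 'KU0001'      -- key column assumed for illustration
 ORDER BY EXPORT_SEQ;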

Related

Neo4j query with condition on concatenated string is very slow

I have Person nodes with basic string fields (firstName, lastName, fatherName, motherName), and I am trying to link nodes based on those fields.
A simple query where I compare motherName to the concatenation of first name and last name, such as
match(p1:Person) match (p2:Person) where p1.motherName=p2.firstName+' '+ p2.lastName return p1,p2 limit 500
takes around 1 hour (removing the ' ' from the concatenation does not make a difference). Using match(p1:Person),(p2:Person) also makes no difference.
Comparing exact fields, on the other hand, such as
match(p1:Person) match (p2:Person) where p1.motherName=p2.firstName return p1,p2 limit 500
only takes a few seconds.
I have noticed something peculiar regarding transaction memory: in the first query the estimatedUsedHeapMemory is always 2097152 and currentQueryAllocatedBytes is 64,
but I see the database is consuming around 7.5 GB of memory.
When running the second query, the numbers for heap and query memory are much bigger. Could something be preventing the query from using as much memory as it needs, and thus making it slow?
I had successfully run a query on all the data to link persons and fathers, matching on exact fields, which took 2.5 hours, while the query for the mothers, which needs to compare concatenated strings, was still running after 9 hours with no result.
Query for father linking, which was successful.
CALL apoc.periodic.iterate(
"match (p1:Person) match(p2:Person) where p1.fatherName=p2.firstName and p1.lastName=p2.lastName and p1.dateOfBirth>p2.dateOfBirth return p1,p2",
"MERGE (p1)-[:CHILD_OF {parentRelationship:'FATHER'}]->(p2)",
{batchSize:5000})
I have 4 million nodes and my DB size is 3.14 GB; these are my memory settings:
NEO4J_server_memory_heap_max__size=5G
NEO4J_server_memory_heap_initial__size=5G
NEO4J_server_memory_pagecache_size=7G
I have tried running the fast query on the data first, so that it would load the data into memory.
I tried concatenating without the ' ', but nothing helps.
I previously had a range index on firstName, which caused the father query to also be extremely slow and to show the same cap on used memory; I had to drop it in order to get that query to work.
Below are my suggestions:
Index the field dateOfBirth on Person node.
String comparison always slows down when there is a large set of data. Rather than comparing the strings directly, try using apoc.util.md5() (https://neo4j.com/labs/apoc/4.0/overview/apoc.util/apoc.util.md5/).
This produces a hash of the values passed, which makes the comparison fast. So your query would be:
CALL apoc.periodic.iterate(
"match (p1:Person) match(p2:Person) where apoc.util.md5([p1.fatherName]) = apoc.util.md5([p2.firstName]) and apoc.util.md5([p1.lastName]) = apoc.util.md5([p2.lastName]) and p1.dateOfBirth > p2.dateOfBirth return p1,p2",
"MERGE (p1)-[:CHILD_OF {parentRelationship:'FATHER'}]->(p2)",
{batchSize:5000})
Hope this helps!

Firestore chat-app: Is this a valid document structure for multi-recipient messages?

Suppose a chat app has 10 million Firebase users, and hundreds of millions of messages.
I have a Firestore collection containing messages represented as documents in a time-series, and each of these messages may be received and viewed by up to 100 of these users. Please note, these users are not organized in stable groups, since each message may have a completely different set of users that receive it.
I need to be able to find, very efficiently (in terms of time and cost),
all messages after some specific time, directed to some specific user.
My first failed attempt would be to list the recipient users in a recipients array field, for example:
sender: user3567381
dateTime : 2019-01-24T20:37:28Z
recipients : [user1033029, user9273842, user8293413, user6273581]
However, that will not allow me to do my queries efficiently.
As a second failed attempt, since Firestore is schemaless, I thought about making each user a field, like this:
sender: user3567381
dateTime : 2019-01-24T20:37:28Z
user1033029 : true
user9273842 : true
user8293413 : true
user6273581 : true
Then, for example, if I want to know all messages for user 8293413 after 3:00 PM today, I could do it like this:
messages.where("user8293413", "==", true).where("dateTime", ">=", "2019-01-24T15:00:00Z")
This is a composite-index query, and it would need one composite index per user. Unfortunately, there is a limit of 200 composite indexes per database.
To solve this, my current attempt is to turn the date into values of the user fields, like this:
sender: user3567381
dateTime : 2019-01-24T20:37:28Z
user1033029 : 2019-01-24T20:37:28Z
user9273842 : 2019-01-24T20:37:28Z
user8293413 : 2019-01-24T20:37:28Z
user6273581 : 2019-01-24T20:37:28Z
Now, if I want to know all messages for user 8293413 after 3:00 PM today, I could do it like this:
messages.where("user8293413", ">=", "2019-01-24T15:00:00Z")
Note this is now a single-field index.
From the documentation I know that Firestore creates single-field indexes for all fields, so it will create an index for user8293413 specifically.
This means the search will be fast, right? And that the number of reads will be kept to a minimum (one read per message).
However, since I have 10 million users, Firestore will have to create 10 million single-field indexes (assuming all users receive messages) for the entire database.
From the documentation Firestore has these limitations:
Maximum number of composite indexes for a database: 200
Maximum number of single-field index exemptions for a database: 200
Maximum number of index entries for each document: 40,000 (The number of index entries is the sum of the following for a document: The number of single-field index entries + The number of composite index entries)
Maximum size of an index entry: 7.5 KiB
Maximum sum of the sizes of a document's index entries: 8 MiB (The total size is the sum of the following for a document: The sum of the size of a document's single-field index entries + The sum of the size of a document's composite index entries)
Maximum size of an indexed field value: 1500 bytes (Field values over 1500 bytes are truncated. Queries involving truncated field values may return inconsistent results.)
By reading the above, these call my attention:
Maximum number of index entries for each document: 40,000
Maximum sum of the sizes of a document's index entries: 8 MiB
However, those limits are stated per document, not per database, and I would only have millions of indexes across the database, not on each document.
Is that a problem? Will that many indexes affect performance? How about the storage cost of all these indexes? Is Firebase prepared at all for a large total number of indexes per database?
Although this comes many months later, for any future readers: the first attempt would likely work best.
Using a single static field for the timestamp and a single static field for the recipients means the indexes remain negligible and you won't have to think about them.
To find all messages for a user, which seems to be your goal here:
For example, if I want to know all messages for user 8293413 after
3:00 PM today, I could do it like this:
This would simply look like this in pseudocode:
firestore.collection('messages').where('recipients', 'array-contains', userId).where('dateTime', '>=', '3pm today').get()
This should be easy enough on performance; Firestore is optimized for the operators it provides, e.g. '==', '>=', 'array-contains'.

Cannot import solution because index size too large

I am experiencing the following error for a custom entity:
"Index size exceeded the size limit of 900 bytes. The key is too large. Try removing some columns or making the strings in string columns shorter."
I looked at the key and it previously had a max length of 300. I reduced it to 20, since it is a Phone Number entity, but it still fails to import with the error above. I also increased it to 450, based on similar Dynamics questions I found online, but no dice. How can I get around this error? Where should I be looking?
Is your field a find column in the Quick Find view?
If yes, that's the reason: an index is created automatically for find columns, and there are limitations regarding the maximum key length.
This is a limitation on the SQL Server side.
https://learn.microsoft.com/en-us/sql/sql-server/maximum-capacity-specifications-for-sql-server?view=sql-server-2017
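For context, here is a rough illustration of the SQL Server limit the error message refers to (table and column names are made up; the exact ceiling depends on the SQL Server version and index type, e.g. newer versions allow 1,700 bytes for nonclustered index keys):

-- NVARCHAR stores 2 bytes per character, so a 900-byte index key holds at most
-- 450 characters from a single NVARCHAR column; a key built over several string
-- columns can exceed the limit even when each column individually looks small.
CREATE TABLE dbo.ContactPhone (
    ContactPhoneId INT IDENTITY PRIMARY KEY,
    PhoneNumber    NVARCHAR(300) NOT NULL,
    AreaCode       NVARCHAR(300) NOT NULL
);

-- Potential key size: (300 + 300) * 2 = 1200 bytes, over the 900-byte limit,
-- so index creation warns and rows with long values cannot be indexed.
CREATE UNIQUE INDEX IX_ContactPhone
    ON dbo.ContactPhone (PhoneNumber, AreaCode);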

Oracle BI Publisher Word Multiply

I am creating a confirmation letter using BI Publisher with the Word Add-In.
I need a field that converts a varchar to a number and then multiplies it by 75%, and of course equals zero if the field is zero.
For example, my field is room_rate, currently 3,000.00, and I need to show the net amount, which is always 75% because 25% is taxes, so it should display 2250.
I have tried writing the below, but it results in a '0'.
I apologise for my lack of skills as I am just beginning.
Thanks in advance!!
If you have these XML fields per row:
<room_rate>3,000.00</room_rate>
<net_percent>75</net_percent>
You would want to use this for the field you want to calculate:
<?xdofx:to_number(room_rate) * (net_percent div 100)?>
You really should be sending the value as a number in the XML, and storing it in the database as a number for that matter.

Magento Terms & Conditions max character limit

I have a problem here that even after hours of searching with my friend Google, I'm still getting no results...
My Terms & Conditions are larger than the maximum character limit of the Magento section for them.
I would like to know if one of you could please help me locate the file and the line to edit so I can raise that limit and enter my full Terms & Conditions without a problem.
Thank you very much in advance for your time.
sincerely,
Nicolas
The T&C content is stored in the checkout_agreement table, in a field named content.
This field is assigned the datatype TEXT, which has a maximum length of around 64 KB; how much text actually fits depends on how many bytes your UTF-8 encoded content uses.
You would need to change the datatype to MEDIUMTEXT (up to 16 MB) or LONGTEXT (up to 4 GB).
Testing this will be necessary to make sure no validation limits have been imposed on the entry template.
You can modify the structure of the checkout_agreement table by changing the data type of the content field from TEXT to LONGTEXT to allow for more characters.
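A minimal sketch of that change, assuming the default table name with no table prefix (back up the table first):

-- LONGTEXT removes the ~64 KB cap that TEXT places on the agreement content.
ALTER TABLE checkout_agreement
    MODIFY COLUMN content LONGTEXT;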
