At first I generated the field with 7 characters. When I got the error below, I deleted the application, application group, and folder, then re-generated the application, application group, and folder with the field set to 33 characters.
I still get the error below when loading the AFP file into OnDemand.
Any idea?
Error:
Row 1: The string "33333-5109741" has a length of 13 and the field has a maximum length of 7
I deleted all definitions and re-generated the application group, application, and folder. Then, using the AFP file, I generated the index and resources and loaded the AFP successfully.
Power Query sourcing multiple Excel files from a folder.
Files are monthly transactions. The month and year are part of the file names. When the next month comes, new files (in the same format of course, but with new file names) replace the previous ones in the source folder. Having the new file names causes the query to fail on refresh in the following way.
When the files are combined and displayed to begin the transformations, the file names constitute a column of data (named Source). One of my steps in transforming the data is to “use first row as headers”; at this point the first file name in that Source column becomes its column header name.
The problem is that when files with new names replace the previous ones, that column name is no longer found, since the row promoted to be the column header now contains the name of a new file. Power Query looks for a column header with the original file name, doesn’t find it, and so subsequent transformations using that column cause errors.
The error message is: “[Expression.Error] The column ‘[OriginalFileName]’ of the table wasn’t found.”
Basically, that original file name takes on a permanent role as a column name that is part of the query.
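To illustrate with a simplified sketch (the step and file names here are made up), the generated M steps end up looking something like this, with whatever file name happened to be in the first row hard-coded into the later step:

#"Promoted Headers" = Table.PromoteHeaders(#"Combined Files", [PromoteAllScalars=true]),
// the next step now references that literal file name, so it breaks
// as soon as the folder contains files with different names
#"Changed Type" = Table.TransformColumnTypes(#"Promoted Headers", {{"SAMPLE Jan 2018.xlsx", type text}})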
I successfully managed to get around the problem by manually renaming all the columns instead of promoting the first data row to be the column headers. Now files with new names are processed without complaint. But this solution is clunky and I would like to keep the step of promoting the first row to be the header.
Does anyone know how to overcome this problem?
I've been working with Oracle UCM.
All I have to do is scan some documents, copy those PDF files into the Oracle Content Server, and then I should have access to the site and be able to search for those files by their respective names.
So far so good, but here's where things get ugly.
Once I search for a file, the UCM site doesn't show me the real name, the one I gave to the scanned PDF. The site shows the name "sitios" ("sites" in Spanish; I'm in a Latin American country) instead of the name I gave it in the first place.
Files are usually stored in two places in the file system.
The first is the vault location, where the file is stored as-is. The file name is usually dID.extension.
The second is the weblayout location, where the file is stored as the web-viewable version, usually named dDocName.extension.
If you want to know the original file name in the UCM search results, try reading the dOriginalName metadata field.
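As a hypothetical example with made-up values: for a scanned PDF checked in as informe_enero.pdf, with dID 12345 and dDocName SITIOS000123, you would see

vault copy: 12345.pdf (dID.extension)
weblayout copy: SITIOS000123.pdf (dDocName.extension)
dOriginalName: informe_enero.pdf (the name you gave the file, and the value to display in the search results)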
I am trying to run Alenka (https://github.com/antonmks/Alenka) by loading a custom table test.tbl and firing select queries on it.
It works fine with 3 or 4 rows.
But when I increase the number of entries beyond 6 or 10 rows, it does not show any error while loading (./alenka load_test.sql); however, when I run the query (./alenka testquery.sql), it gives an error:
terminate called after throwing an instance of 'thrust::system::system_error'
  what():  invalid argument
Aborted (core dumped)
---test.tbl---
1|2.12345|3|4|5|6|7|
1|2|3|4|5|6|7|
1|2|3|4|5|6|7|
1|2|3|4|5|6|7|
1|2|3|4|5|6|7|
1|2|3|4|5|6|7|
1|2|3|4|5|6|7|
1|2|3|4|5|6|7|
1|2|3|4|5|6|7|
This is the load_test.sql query
A := LOAD 'test.tbl' USING ('|') AS (var1{1}:int, var2{2}:int, var3{4}:int,
var4{5}:int,var5{6}:int, var6{7}:int, var7{8}:int);
STORE A INTO 'test' BINARY;
And testquery.sql
B := FILTER test BY id <= 19980902;
D := SELECT var2 AS var2
FROM B;
STORE D INTO 'mytest.txt' USING ('|');
Can someone explain the reason for this error?
Thank you
The problem arose from minor errors which added up to this confusion.
When a load command is fired in Alenka, it creates binary files containing the data from each column of the table.
These files are overwritten if the table is loaded again; however, if the column names are changed, new files are created alongside the old ones.
So it's a good idea to delete those files after renaming columns in a table, in order to avoid using them again.
Hence, I got this error because I had loaded data with different column names earlier and forgot to delete those files (test.id*) from the folder.
Along with that, I made one more blunder: filtering by "id" instead of 'var1' in the query file (testquery.sql).
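For reference, a corrected testquery.sql simply filters on var1, the column that actually exists in the current schema, instead of the leftover id column (keeping the same threshold as my original query):

B := FILTER test BY var1 <= 19980902;
D := SELECT var2 AS var2
 FROM B;
STORE D INTO 'mytest.txt' USING ('|');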
Since the id files had 9 entries (from the previous schema), it ran perfectly for 9 rows, but when the database size increased beyond that, the thrust library threw a system error.
Hope this helps someone avoid wasting time like I did.
I am using makemsi's 'MergeModule' command to include Microsoft's redistributable 'policy_8_0_Microsoft_VC80_DebugCRT_x86_x64.msm' in my MSI.
The ppwizard preprocessor gives me the error: 'The underlying OLE container used to store binary data has a key limit of 62 characters. This was exceeded by 2 character(s), the key '_MAKEMSI_Cabs.policy_8_0_Microsoft_VC80_DebugCRT_x86_x64.msm.cab''
I believe this happens because binary data is stored inside the MSI as an OLE stream whose key is created by concatenating the table name and the values of the record's primary keys with a period delimiter. The limit for this key is 62 characters, which is exceeded here by '_MAKEMSI_Cabs.policy_8_0_Microsoft_VC80_DebugCRT_x86_x64.msm.cab', where '_MAKEMSI_Cabs' is the table name and 'policy_8_0_Microsoft_VC80_DebugCRT_x86_x64.msm.cab' is the value.
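Counting it out confirms this: '_MAKEMSI_Cabs' is 13 characters, the period delimiter is 1, and 'policy_8_0_Microsoft_VC80_DebugCRT_x86_x64.msm.cab' is 50, giving a key of 64 characters, exactly the 2 characters over the 62-character limit reported in the error.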
Can anyone please suggest a workaround for this? Thanks in advance.
I want to know the best strategy for approaching the following problem in Talend:
I need to load data from a set of delimited files that are stored in a directory with names like (SAMPLE1.DAT, SAMPLE2.DAT, ... , SAMPLEX.DAT)
The target will be a table in a MySQL database
I have to load all data at once because after this task I need to work with all records in the same table
I'm a bit confused because I don't know if this is possible in Talend. I was looking at the tFileInputDelimited component, but I didn't find a way to solve it.
Thanks
To read several files from one directory, you would use the tFileList component. It allows you to specify a directory and a file name pattern. All files in the directory matching the pattern will be processed, one after the other.
You need to use an "Iterate" link from the tFileList component to those components that describe what you want to do with each file. In your case, you would start with a tFileInputDelimited component (read the file) and connect the main output of that to a tMysqlOutput component. The MySQL component will, by default, just append the data to an existing table, so that should get you the result you want.
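The resulting job layout would look roughly like this (component names assume the default numbering):

tFileList_1 --(Iterate)--> tFileInputDelimited_1 --(Main)--> tMysqlOutput_1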
In the tFileInputDelimited component, you would not use a fixed filename, but a variable filename which is set by the tFileList component for each iteration (your loop variable, so to speak). The name of that loop variable can be seen in the Outline view in the studio, usually in the bottom left corner.
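For example, assuming the tFileList component keeps its default name tFileList_1, the File name field of the tFileInputDelimited would contain an expression along these lines instead of a fixed path:

((String)globalMap.get("tFileList_1_CURRENT_FILEPATH"))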
You would chain the components tFileInputDelimited into tMap (optional) into tMysqlOutput.
Step 1: Configure some components like this, except you will use the delimited file input.
Step 2: Configure the component settings for the delimited file; click the disk icon for the wizard.
Step 3: Configure your database by right-clicking on Db Connection under Metadata, then following the wizard.
Step 4: Right-click on each component and choose Row > Main, then drag to the next step in the flow.
Step 5: Open your tMap and map the columns from the file schema to the database schema.
Step 6: Run the job. It should work if you have followed all the wizards; if there are errors, just hover over the red component and it usually describes them pretty well. You will see how many records have been transferred as the job runs.
Step 7: After you have made it that far, create a delimited file output (tFileOutputDelimited) with the same schema as the input, right-click on the input, choose Row > Rejects, and drag that to the new delimited output; this is where any records rejected by the tMap will be sent.