Text is truncated incorrectly in a Visual FoxPro database - visual-foxpro

In my application, I need to store text in a Visual FoxPro database. The maximum length of text allowed in Visual FoxPro is 255 characters, but my application only allows up to 88 characters. I am connecting to the database with the Visual FoxPro OLE DB provider (VFPOLEDB.1).

If the application accepts more than 88 characters and still displays the full text, it may be saving the overflow (everything beyond 88 characters) to another table, which would explain why the complete data appears in the application. Cross-check all tables related to that particular column.
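A common cause, not confirmed in the question so treat it as an assumption, is that the column itself was created as a character field of width 88, which silently truncates anything longer. A minimal sketch in VFP SQL, assuming a hypothetical table mytable with a character column notes, that widens the column or converts it to a Memo field (Memo fields are not bound by the character-field length limit):

* Hypothetical names; run against the table in question.
ALTER TABLE mytable ALTER COLUMN notes C(254)   && widen to the character-field maximum
* or, if the text can exceed the character-field maximum:
ALTER TABLE mytable ALTER COLUMN notes M        && convert to a Memo field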

Related

When I write a string to an image field in SQL Server it gets saved in UTF-16 LE encoding, and I need it to be UTF-8

I have about 1200 databases in SQL Server 2016 Enterprise in which documents are stored as BLOBs in image fields. I migrated all the documents to our Document Management System (DMS), and now I want to replace the files in the database with shortcuts to the corresponding files in the DMS. I tried this a few months ago and it went well, but now I've run into an issue. When I use this command
convert(varbinary(max), <string value of shortcut>)
it gets written to the field, but when I try to open the shortcut I get an error. If I create the same shortcut from our DMS it is exactly the same except for the encoding: my binary is UTF-16 little-endian, while the DMS shortcut is encoded in UTF-8. The file size has also doubled, which is logical since UTF-16 uses two bytes for every character. When I change the encoding in Notepad++, my shortcut works.
I need my BLOB to be encoded in UTF-8. That is possible: if I upload a shortcut through the system that uses the database, it is stored correctly. I can't change the collation of the table or field because this is a vendor database. It's a pretty old-fashioned system; who uses BLOBs in the first place, and if you do, why image and not varbinary?
I'm not much of a programmer so any help would be greatly appreciated.
I tried updating to the latest client version (the application, not SQL Server); that didn't change anything.
I also tried nvarchar, but that doesn't work on image fields.
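The root cause is that converting an nvarchar (Unicode) string straight to varbinary copies its UTF-16 LE bytes verbatim. Converting through varchar first yields one byte per character, and for shortcut text that contains only ASCII characters those bytes are identical to UTF-8. A minimal sketch, with hypothetical table and column names (dbo.Documents, DocBody, DocId):

-- Hypothetical shortcut text and key; substitute your real values.
DECLARE @shortcut nvarchar(max) = N'[DMS] DocumentId=12345';
DECLARE @id int = 1;

-- CONVERT from nvarchar produces UTF-16 LE (two bytes per character);
-- going through varchar first produces single-byte output, which is
-- byte-identical to UTF-8 for ASCII-only text.
UPDATE dbo.Documents
SET DocBody = CONVERT(varbinary(max), CONVERT(varchar(max), @shortcut))
WHERE DocId = @id;

Note this only matches UTF-8 while the shortcut text stays within ASCII; accented characters would need a real UTF-8 conversion, and SQL Server 2016 has no UTF-8 collations (those arrived in SQL Server 2019).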

ETL: Informatica - truncation issue while loading flat file source data into an Oracle target table

I am trying to load data from a file into a relational target. The target DB is Oracle.
In the source file, the data for one account contains special characters,
e.g. Viswanathan^A# de
In our application the length of this field is 50, so we set it to 50 in Informatica as well.
Other rows load properly without any issue; they don't contain special characters.
While loading, this value is truncated to Viswanathan d, so the character e is lost, and because of that the application rejected the record.
I would like to know how to check and set the code page for the target and the source.
I think the issue is the data length, or maybe the code page. You are probably trying to insert Unicode data (data with accents, e.g. Dé). You can change the settings below and try again.
Change the code page of the target, in the target definition, to a Unicode code page.
Change the Integration Service mode to Unicode.
Increase the target column to varchar2(100 char); storing Unicode values can take up to double the bytes of ASCII values (see the sketch below).
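For the last step, declaring the Oracle column with CHAR length semantics makes the limit count characters rather than bytes. A minimal sketch, assuming a hypothetical target table accounts with a column account_name:

-- CHAR semantics: 100 characters are allowed regardless of how many
-- bytes each character needs in the database character set.
ALTER TABLE accounts MODIFY (account_name VARCHAR2(100 CHAR));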

How to increase the Visual Studio load test transaction name length? Currently it's truncated if longer than 64 characters

I think you cannot increase the size. According to the description of the WebLoadTestTransaction table in the SQL results database, the name field is of type nvarchar(64), which suggests that the name size limit is a fixed property of the database schema.
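To verify this against your own results store, you can list the column definitions with a standard SQL Server metadata query (the results database name varies by setup):

-- Run inside the load test results database.
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'WebLoadTestTransaction';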

Replace an invalid character in Oracle (by editing the dmp file)

We have a portal written in PHP/MySQL and an enterprise application based on Java EE and Oracle. Recently we found out that a certain Unicode character (0643, to be precise) is invalid in text columns (due to improper data entry by end users) and must be changed to another character (06A9).
In MySQL I simply changed the export file using a text editor's find-and-replace tool, but in Oracle the dmp file is binary and I have no idea how to edit it.
How can I change the invalid character?
Is there an alternative to iterating through all text columns in all tables?
(I have saved that as a last resort!)
Editing an Oracle dump file may be possible, but it isn't practical: even if you could get in and change something, you'd risk corrupting the file, and I doubt Oracle Support would be impressed (see this AskTom question, for example).
If you're using Data Pump and you know which column(s) the data is in, you might be able to use the REMAP_DATA parameter to change it on the fly (sketched below), or the QUERY parameter to skip the affected rows, but it doesn't sound like you're in that situation. You could potentially add temporary constraints to the relevant column(s) to block the value, so the import would reject (and log) the affected rows, but that's painful and messy.
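As an illustration of the REMAP_DATA route (all object names here are hypothetical), you create a function that swaps the character and reference it on import:

-- Hypothetical package; REPLACE swaps U+0643 for U+06A9 on the fly.
CREATE OR REPLACE PACKAGE remap_pkg AS
  FUNCTION fix_kaf(p_val IN VARCHAR2) RETURN VARCHAR2;
END remap_pkg;
/
CREATE OR REPLACE PACKAGE BODY remap_pkg AS
  FUNCTION fix_kaf(p_val IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN REPLACE(p_val, UNISTR('\0643'), UNISTR('\06A9'));
  END fix_kaf;
END remap_pkg;
/

and then reference it on the impdp command line:

impdp scott DIRECTORY=dp_dir DUMPFILE=export.dmp REMAP_DATA=scott.my_table.text_col:scott.remap_pkg.fix_kaf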
If you do have to check all columns on all tables, you can drive the update from the data dictionary, as in the sketch below.
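A minimal sketch of that last resort, assuming only the current schema's character columns need fixing; test it on a copy of the data first:

-- Fix U+0643 in every text column of the current schema.
BEGIN
  FOR c IN (SELECT table_name, column_name
              FROM user_tab_columns
             WHERE data_type IN ('CHAR', 'VARCHAR2', 'NCHAR', 'NVARCHAR2')) LOOP
    EXECUTE IMMEDIATE
      'UPDATE "' || c.table_name || '" SET "' || c.column_name || '" = ' ||
      'REPLACE("' || c.column_name || '", UNISTR(''\0643''), UNISTR(''\06A9'')) ' ||
      'WHERE INSTR("' || c.column_name || '", UNISTR(''\0643'')) > 0';
  END LOOP;
END;
/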

Unicode support differences in Oracle JDBC drivers

I replaced ojdbc14.jar with ojdbc5-11.2.0.2.0.jar in my Java project because I needed to move to Java 1.5, but since then every Unicode column only allows half the specified length.
I use java.sql.DatabaseMetaData to get the table columns and their column sizes and validate input against them, halving the size when the data type is a Unicode type. This worked before, but after I changed the driver the logic broke.
For example, for a column specified as NVARCHAR(200) I could previously enter 200 characters, but now I can only enter 100.
My Oracle database uses "AL16UTF16" as its national character set. Are there any differences in Unicode support between ojdbc14 and ojdbc5?
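A likely explanation, though it's an assumption worth checking against the driver release notes, is that the older driver reported COLUMN_SIZE for NVARCHAR2 columns in bytes while the newer one reports it in characters, so halving it a second time cuts 200 down to 100. You can compare against what the database itself records: under AL16UTF16, an NVARCHAR2(200) column has DATA_LENGTH 400 (bytes) but CHAR_LENGTH 200 (characters):

-- Byte length vs character length for a hypothetical table's columns.
SELECT column_name, data_type, data_length, char_length, char_used
FROM user_tab_columns
WHERE table_name = 'MY_TABLE';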
