I have an application that was originally written in Borland C++ and used a Blowfish algorithm implemented in the TurboPower LockBox component.
This application has now been ported to C#. Currently I call a Borland C++ dll that uses this algorithm. However, when running the application on a 64-bit OS, I get errors whenever I attempt to use this dll. If I compile the application as 32-bit, everything works, but we want this application to work as a 64-bit app. As far as I can tell, that means I need a .NET Blowfish implementation that behaves like the C++ one.
I found Blowfish.Net and it looks promising. However, when I use the same key and text, the encrypted results do not match. I did find out that the C++ dll uses Blowfish in ECB mode. It also converts the result to Base64, which I have also done.
Any help with this would be appreciated. Here is some test code in C#.
//Convert the key to a byte array. In C++ the key was 16 bytes long
byte[] _key = new byte[16];
Array.Clear(_key, 0, _key.Length);
var pwdBytes = System.Text.Encoding.Default.GetBytes(LicEncryptKey);
int max = Math.Min(16, pwdBytes.Length);
Array.Copy(pwdBytes, _key, max);
//Convert the string to a byte[] and pad it to the 8-byte block size
var decrypted = System.Text.Encoding.ASCII.GetBytes(originalString);
var blowfish = new BlowfishECB();
blowfish.Initialize(_key, 0, _key.Length);
int arraySize = decrypted.Length;
int diff = arraySize % BlowfishECB.BLOCK_SIZE;
if (diff != 0)
{
arraySize += (BlowfishECB.BLOCK_SIZE - diff);
}
var decryptedBytes = new byte[arraySize];
Array.Clear(decryptedBytes, 0, decryptedBytes.Length);
Array.Copy(decrypted, decryptedBytes, decrypted.Length);
//Prepare the byte array for the encrypted string
var encryptedBytes = new byte[decryptedBytes.Length];
Array.Clear(encryptedBytes, 0, encryptedBytes.Length);
blowfish.Encrypt(decryptedBytes, 0, encryptedBytes, 0, decryptedBytes.Length);
//Convert to Base64
string result = Convert.ToBase64String(encryptedBytes);
It won't be compatible with your TurboPower LockBox data.
I'd suggest that you provide a utility to do the data migration by decoding using LockBox in C++ (32-bit), outputting to temp files/tables and re-encoding using Blowfish.Net and C# (64-bit).
This data migration is done once before any upgrade to the .NET version, then it's all compatible with it.
Since you're changing the format anyway, you could also omit the Base64 conversion and store binary files/BLOBs instead. Other ideas may be useful too, like applying multiple encryptions or replacing Blowfish with something else.
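If it helps while debugging the mismatch, an independent third implementation can tell you which side deviates from standard Blowfish. Here is a minimal sketch using PyCrypto's Blowfish in ECB mode (PyCrypto is my assumption here, and the key and plaintext are illustrative); given the same key and input, any standard Blowfish ECB implementation should produce the same Base64 result:

from base64 import b64encode
from Crypto.Cipher import Blowfish

key = b'0123456789abcdef'        # a 16-byte key, matching the key size in the question
plaintext = b'some test text!!'  # 16 bytes: already a multiple of the 8-byte block size

# Standard Blowfish in ECB mode; compare this Base64 output against
# both the C++ dll result and the Blowfish.Net result.
cipher = Blowfish.new(key, Blowfish.MODE_ECB)
print(b64encode(cipher.encrypt(plaintext)))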
Using the AES library, I am trying to send encrypted data from the Arduino side to the Raspberry Pi side. The encrypted data that is being printed on the Arduino serial monitor is not the same as what is being printed on the Raspberry Pi side.
Maybe it is a decoding problem.
Also, while decrypting on the Raspberry Pi side, it gives an error saying "the input text must be a multiple of 16 in length"; when I pad the input (temperature data) with zeroes, it still gives the same error message.
I have tried using 'utf-8' and 'iso-8859-1' for decoding, but it still doesn't show the same decrypted data.
PYTHON CODE:
from Crypto.Cipher import AES
import serial

ser = serial.Serial('/dev/ttyS0', 9600)
st=ser.readline()
st1=st.decode('utf-8')
obj = AES.new('This is a key123', AES.MODE_CBC, 'This is an IV456')
ciphertext = obj.encrypt(message)
obj2 = AES.new('This is a key123', AES.MODE_CBC, 'This is an IV456')
obj2.decrypt(ciphertext)
ARDUINO CODE:
void aesTest (int bits)
{
    aes.iv_inc();
    byte iv [N_BLOCK];
    int plainPaddedLength = sizeof(chartemp) + (N_BLOCK - ((sizeof(chartemp)-1) % 16));
    byte cipher [plainPaddedLength];
    byte check [plainPaddedLength];
    aes.set_IV(myIv);
    aes.get_IV(iv);
    aes.do_aes_encrypt(chartemp, sizeof(chartemp), cipher, key, bits, iv);
    aes.set_IV(myIv);
    aes.get_IV(iv);
    aes.printArray(cipher, (bool)false); //print cipher with padding
    String cipher1=String((char*)cipher);
    myserial.println(cipher1);
}
Here chartemp is the temperature from the LM35 IC, converted to a character array.
I expect the output on the Raspberry Pi side to be decrypted properly.
Encrypted data is a sequence of pseudo-random bytes. It is not a valid UTF-8 string.
This line is a bit dodgy, but probably technically "works":
String cipher1=String((char*)cipher);
But this line is incorrect:
st1=st.decode('utf-8')
You can't take random data and decode it as utf-8. You either need to send and receive the data as just a string of bytes, or encode the data into a string, such as with Base64. I suspect you'll be more comfortable with the latter, so look at Base64 in Java and base64 in Python.
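For example, if the Arduino Base64-encoded the cipher bytes before println, the Raspberry Pi side might be sketched like this (a sketch only, reusing the key and IV from the question and assuming one reading arrives per line):

import base64
import serial
from Crypto.Cipher import AES

ser = serial.Serial('/dev/ttyS0', 9600)

line = ser.readline().strip()        # Base64 text is plain ASCII, so reading a line is safe
ciphertext = base64.b64decode(line)  # back to the raw encrypted bytes
obj = AES.new(b'This is a key123', AES.MODE_CBC, b'This is an IV456')
plaintext = obj.decrypt(ciphertext)  # a multiple of 16 bytes, padding still attached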
I am trying to register a customer with his mobile number. I store the mobile number encrypted, and I also maintain a session that stores this encrypted mobile number. Once I leave the application and try to log in with the same mobile number, the session is gone, so I am not able to take the encrypted mobile number from the session.
Is there any way that I can create an encryption mechanism to provide the same encrypted output every time for the same mobile number?
This is the encryption mechanism I am using.
public encrypt_mobile(mobile): string {
    var salt = crypto.lib.WordArray.random(128 / 8);
    var key = crypto.PBKDF2("123", salt, {
        keySize: 256 / 32,
        iterations: 100
    });
    var iv = crypto.lib.WordArray.random(128 / 8);
    var encrypted = crypto.AES.encrypt(mobile, key, {
        // instead of mobile, try some string such as "9876543210"
        iv: iv,
        padding: crypto.pad.Pkcs7,
        mode: crypto.mode.CBC
    });
    var encrypted_mob = salt.toString() + iv.toString() +
        encrypted.toString();
    console.log("encrypted : ", encrypted_mob);
    return encrypted_mob;
}
You are using CBC mode with a random IV.
mode: crypto.mode.CBC
Actually, that is the better practice, because a random IV makes the encryption probabilistic. But it also means the same mobile number encrypts differently every time, which prevents comparison on the encrypted data.
If you need an equality test on encrypted data without decryption, you can use the ECB mode of operation:
mode: CryptoJS.mode.ECB
ECB mode doesn't use/require an IV. However, keep in mind that ECB mode is deterministic and leaks information; see the ECB penguin image on Wikipedia.
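To illustrate the difference, here is a minimal sketch in Python using PyCrypto (the library already used elsewhere on this page); the key and the space padding are illustrative only:

from Crypto.Cipher import AES

key = b'0123456789abcdef'  # a 16-byte key, illustrative only

def encrypt_deterministic(text):
    # ECB uses no IV, so equal plaintexts always give equal ciphertexts.
    padded = text + b' ' * (-len(text) % 16)  # pad to the 16-byte AES block size
    return AES.new(key, AES.MODE_ECB).encrypt(padded)

# Equality testing on the ciphertexts now works without decryption:
print(encrypt_deterministic(b'9876543210') == encrypt_deterministic(b'9876543210'))  # True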
I'm using .NET Core 2 with the System.Data.OracleClient package published some weeks ago here: https://www.nuget.org/packages/System.Data.OracleClient/
I can read numbers, dates, and normal English characters, but not Chinese, and probably a lot of other non-Western characters fail too.
Here's a sample program to illustrate the error:
using System;
using System.Text;
using System.Diagnostics;
using System.IO;
using System.Data.OracleClient;

namespace OracleConnector
{
    class Program
    {
        static void Main()
        {
            TestString();
            return;
        }

        private static void TestString()
        {
            string connStr = "Data Source = XE; User ID = testuser; Password = secret";
            using (OracleConnection conn = new OracleConnection(connStr))
            {
                conn.Open();
                var cmd = conn.CreateCommand();
                cmd.CommandText = "select 'some text in English language' as a, '储物组合带门/抽屉, 白色 卡维肯, 因维肯 白蜡木贴面' as b from dual";
                var reader = cmd.ExecuteReader();
                reader.Read();
                string sEnglish = reader.GetString(0);
                string sChinese = reader.GetString(1);
                Trace.WriteLine("English from db: " + sEnglish);
                Trace.WriteLine("Chinese from db: " + sChinese);
                Trace.WriteLine("Chinese from the code: 储物组合带门 / 抽屉, 白色 卡维肯, 因维肯 白蜡木贴面");
            }
        }
    }
}
It outputs this:
English from db: some text in English languageဂ
Chinese from db: ¿¿¿¿¿¿/¿¿, ¿¿ ¿¿¿, ¿¿¿ ¿¿¿¿¿e
Chinese from the code: 储物组合带门 / 抽屉, 白色 卡维肯, 因维肯 白蜡木贴面
As you can see, Chinese characters from normal code work, but not when they come from the database. Also, the last character in the English text is some messed-up thing. I've also tried the corresponding Mono NuGet package, with the same result.
Anyone have any clue how to fix this?
Edit: I tried adding Unicode=True to the connection string, but Chinese characters still don't work.
This is a problem with the System.Data.OracleClient DLL. I am having the same problem, where 2-, 3-, or even 4-byte Unicode characters are getting tacked onto the end of my strings.
Switching to Mono.Data.OracleClientCore helped slightly, but I still got some odd characters at the end of some strings (a Unicode backspace and a backslash).
I just tried the following library, and it seems to work for my needs (so far):
https://github.com/ericmend/oracleClientCore-2.0
You will need to re-compile for Windows (change to #define OCI_WINDOWS in OciCalls.cs). Will update this answer if I find that it doesn't continue to work.
Still, I think we'll have to wait for Oracle to release their .NET Core-supported solution for any sort of production-ready library.
Please try
Environment.SetEnvironmentVariable("NLS_LANG", ".UTF8");
before creating the connection object.
The System.Data.OracleClient implementation uses external Oracle libraries, which assume (at least on Windows) the ANSI charset.
Setting the NLS_LANG environment variable tells the Oracle libs that you want UTF-8 encoding.
There are (many) more details on the NLS_LANG FAQ page:
http://www.oracle.com/technetwork/database/database-technologies/globalization/nls-lang-099431.html
Append ";Unicode=True" to connectionstring and add Environment.SetEnvironmentVariable ("NLS_LANG",".UTF8"); before create connection
string conn = "DATA SOURCE=hostname.company.org:1521/servicename.company.org;PASSWORD=XYZ;USER ID=ABC;Unicode=True"
Environment.SetEnvironmentVariable("NLS_LANG", ".UTF8");
using (DbConnection conn = create_connection(app_conn))
{
//...
}
In an attempt to recreate the getenvironment(..) C function of _winapi.c in plain Python using ctypes, I'm wondering how the following C code could be translated:
buffer = PyMem_NEW(Py_UCS4, totalsize);
if (! buffer) {
    PyErr_NoMemory();
    goto error;
}
p = buffer;
end = buffer + totalsize;
for (i = 0; i < envsize; i++) {
    PyObject* key = PyList_GET_ITEM(keys, i);
    PyObject* value = PyList_GET_ITEM(values, i);
    if (!PyUnicode_AsUCS4(key, p, end - p, 0))
        goto error;
    p += PyUnicode_GET_LENGTH(key);
    *p++ = '=';
    if (!PyUnicode_AsUCS4(value, p, end - p, 0))
        goto error;
    p += PyUnicode_GET_LENGTH(value);
    *p++ = '\0';
}
/* add trailing null byte */
*p++ = '\0';
It seems that the function ctypes.create_unicode_buffer(..) does something quite close to what I need, which I could reproduce if only I had access to the Py_UCS4 C type, or could be sure of its relation to some other type accessible to Python through ctypes.
Would c_wchar be a good candidate? It seems I can't make that assumption, as Python 2.7 could be compiled in UCS-2 mode if I'm right, and I guess Windows is really waiting for UCS-4 there... even if it seems that ctypes.wintypes.LPWSTR is an alias for c_wchar_p in CPython 2.7.
For this question, it is safe to assume that the target platform is Python 2.7 on Windows, if that helps.
Context (in case it matters):
I'm delving into ctypes for the first time, attempting a plain-Python fix for CPython 2.7's bug affecting the Windows subprocess.Popen(..) implementation. This bug is marked "won't fix". It prevents the use of Unicode in command-line calls (as the executable name or in the arguments). It is fixed in Python 3, so I'm having a go at re-implementing in plain Python the actual CPython 3 implementation of the required CreateProcess(..) in _winapi.c, which in turn calls getenvironment(..).
This possible workaround was mentioned in the comments of an answer to a question related to subprocess.Popen(..) unicode issues.
This doesn't answer the part of the title about building a specifically UCS-4 buffer, but it gives a partial answer to the question in bold, and it manages to create a unicode buffer that seems to work on my current Python 2.7 on Windows (so maybe UCS-4 is not required).
We take the assumption here that c_wchar is what Windows requires (whether that is UCS-4 or UCS-2 is not yet clear to me, and it might not matter, but I admit to having only light confidence in my knowledge here).
So here is the Python code that reproduces the C code as requested in the question:
from ctypes import c_wchar

## creation of the buffer of size totalsize
wenv = (c_wchar * totalsize)()
wenv.value = (unicode("").join([
    unicode("%s=%s\0") % (key, value)
    for key, value in env.items()])) + "\0"
This wenv can then be fed to CreateProcessW and this seems to work.
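For completeness, here is a hedged sketch of how that wenv block might be handed to CreateProcessW through ctypes (the structure definitions follow the Win32 declarations; the command line is illustrative). The CREATE_UNICODE_ENVIRONMENT flag is what tells Windows that lpEnvironment is made of wide characters:

import ctypes
from ctypes import wintypes

CREATE_UNICODE_ENVIRONMENT = 0x00000400  # environment block uses wide chars

class STARTUPINFOW(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD), ("lpReserved", wintypes.LPWSTR),
        ("lpDesktop", wintypes.LPWSTR), ("lpTitle", wintypes.LPWSTR),
        ("dwX", wintypes.DWORD), ("dwY", wintypes.DWORD),
        ("dwXSize", wintypes.DWORD), ("dwYSize", wintypes.DWORD),
        ("dwXCountChars", wintypes.DWORD), ("dwYCountChars", wintypes.DWORD),
        ("dwFillAttribute", wintypes.DWORD), ("dwFlags", wintypes.DWORD),
        ("wShowWindow", wintypes.WORD), ("cbReserved2", wintypes.WORD),
        ("lpReserved2", ctypes.c_void_p), ("hStdInput", wintypes.HANDLE),
        ("hStdOutput", wintypes.HANDLE), ("hStdError", wintypes.HANDLE),
    ]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("hProcess", wintypes.HANDLE), ("hThread", wintypes.HANDLE),
        ("dwProcessId", wintypes.DWORD), ("dwThreadId", wintypes.DWORD),
    ]

si = STARTUPINFOW()
si.cb = ctypes.sizeof(si)
pi = PROCESS_INFORMATION()

# CreateProcessW may modify lpCommandLine in place, so pass a mutable buffer.
cmd = ctypes.create_unicode_buffer(u"notepad.exe")  # illustrative command line

ok = ctypes.windll.kernel32.CreateProcessW(
    None,                         # lpApplicationName
    cmd,                          # lpCommandLine
    None, None,                   # process / thread security attributes
    False,                        # bInheritHandles
    CREATE_UNICODE_ENVIRONMENT,   # dwCreationFlags
    wenv,                         # lpEnvironment: the buffer built above
    None,                         # lpCurrentDirectory
    ctypes.byref(si),
    ctypes.byref(pi))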
I am faced with a large (~ 18 GB) file, exported from SQL Server as a Unicode text file, which means its encoding is UTF-16 (little endian). The file is now stored in a computer running Linux, but I have not figured out a way to convert it to UTF-8.
At first I tried using iconv, but the file is too large for that. My next approach was using split and converting the files one by one, but that didn't work either - there were a lot of errors during the conversions.
So, any ideas on how to convert this to UTF-8? Any help will be much appreciated.
Since you're using SQL Server, I assume your platform is Windows. In the simplest case, you can write a quick and dirty .NET application which reads the source line by line and writes the converted file as it goes. Something like this:
using System;
using System.IO;
using System.Text;

namespace UTFConv {
    class Program {
        static void Main(string[] args) {
            try {
                Encoding encSrc = Encoding.Unicode;
                Encoding encDst = Encoding.UTF8;
                uint lines = 0;
                using (StreamReader src = new StreamReader(args[0], encSrc)) {
                    using (StreamWriter dest = new StreamWriter(args[1], false, encDst)) {
                        string ln;
                        while ((ln = src.ReadLine()) != null) {
                            lines++;
                            dest.WriteLine(ln);
                        }
                    }
                }
                Console.WriteLine("Converted {0} lines", lines);
            } catch (Exception x) {
                Console.WriteLine("Problem converting the file: {0}", x.Message);
            }
        }
    }
}
Just open Visual Studio, start a new C# Console Application project, paste this code in there, compile, and run it from the command line. The first argument is your source file, the second argument is your destination file. Should work.
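If setting up Visual Studio is inconvenient, the same streaming idea can be sketched in Python and run directly on the Linux machine where the file lives (file names are illustrative; if the export starts with a BOM, use encoding='utf-16' instead, which detects it):

import io

# Stream line by line so the 18 GB file never has to fit in memory.
with io.open('export.txt', 'r', encoding='utf-16-le') as src:
    with io.open('export-utf8.txt', 'w', encoding='utf-8') as dst:
        for line in src:
            dst.write(line)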