Oracle TO_CHAR format (decimal always required)

I need to convert my Oracle NUMBER into a string with this format: 999,999 (with , as the decimal separator).
I'm trying TO_CHAR but I'm not getting the correct output.
This is the expected behavior:
9 -> 9,000
9,88 -> 9,880
0 -> 0,000
-1 -> -1,000
80 -> 80,000

Just use a format mask with TO_CHAR. If you are using , as the decimal character:
TO_CHAR(-1, 'FM999G999G990D000') -> -1,000
TO_CHAR(9.88, 'FM999G999G990D000') -> 9,880
...
And make sure your format mask is long enough to fit all possible lengths of the input numbers.
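Whether D actually renders as a comma depends on the session NLS settings; you can also pin the separators per call through TO_CHAR's optional third argument. A minimal sketch (assuming you want comma as decimal and period as group separator):
SELECT TO_CHAR(9.88, 'FM999G999G990D000', 'NLS_NUMERIC_CHARACTERS='',.''') FROM dual;
-- -> 9,880 regardless of the session's NLS_NUMERIC_CHARACTERS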

Related

Impala substr can't get UTF-8 characters correctly

I am new to ETL and I was assigned a task: sanitize some sensitive information before giving the data to a client.
I am using the HUE web client with Impala.
What I want to do is:
For example, given a column value like '京客隆(三里屯店)', I need to transform it into something like '京XXX店)'.
My query is:
select '京客隆(三里屯店)', concat(substr('京客隆(三里屯店)', 1, 3), 'XXX', substr('京客隆(三里屯店)', char_length('京客隆(三里屯店)') -6, 6));
But I get gibberish in the output:
'京客隆(三里屯店)' | concat(substr('京客隆(三里屯店)', 1, 3), 'xxx', substr('京客隆(三里屯店)', char_length('京客隆(三里屯店)') - 6, 6))
京客隆(三里屯店) | 京XXX�店�
The problem is that
select '京客隆(三里屯店)', substr('京客隆(三里屯店)', char_length('京客隆(三里屯店)') - 3, 3);
output: 京客隆(三里屯店) ��
doesn't get the correct characters. Why is that? I pasted the string into a Python shell and I can get the correct characters if I only take the last 3 bytes.
It turns out that I misunderstood the function substr.
substr(STRING a, INT start [, INT len]):
It takes bytes, not characters, starting from (and including) byte position INT start. So for example my string '京客隆(三里屯店)' is 27 bytes long in total, and each UTF-8 character takes 3 bytes here. I need to take the last 3 bytes, which are the ), so I need to write:
substr('京客隆(三里屯店)', 27 - 2, 3)
It then gets bytes 25, 26 and 27 and displays the character ) correctly.
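The byte arithmetic is easy to sanity-check outside Impala. A minimal Ruby sketch (assuming the parentheses are the fullwidth characters, which is why every character here is 3 UTF-8 bytes):
s = "京客隆（三里屯店）"
s.length            # => 9 characters
s.bytesize          # => 27 bytes (9 chars x 3 bytes)
s.byteslice(24, 3)  # => "）"   bytes 25..27, i.e. Impala's substr(s, 27 - 2, 3)
s.byteslice(21, 6)  # => "店）" the last two characters (6 bytes)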
Update:
I was told to use:
SELECT regexp_replace('京客隆(三里屯店)', '(.)(.*)(.{2})', '\\1***\\3');
It keeps the first character and the last two while masking everything in between; works like a charm :P

Converting nginx uuid from hex to Base64: how is byte-order involved?

Nginx can be configured to generate a uuid suitable for client identification. Upon receiving a request from a new client, it appends a uuid in two forms before forwarding the request upstream to the origin server(s):
a cookie with the uuid in Base64 (e.g. CgIGR1ZfUkeEXQ2YAwMZAg==)
a header with the uuid in hexadecimal (e.g. 4706020A47525F56980D5D8402190303)
I want to convert a hexadecimal representation to the Base64 equivalent. I have a working solution in Ruby, but I don't fully grasp the underlying mechanics, especially the switching of byte-orders:
hex_str = "4706020A47525F56980D5D8402190303"
Treating hex_str as a sequence of high-nibble (most significant 4 bits first) binary data, produce the (ASCII-encoded) string representation:
binary_seq = [hex_str].pack("H*")
# 47 (71 decimal) -> "G"
# 06 (6 decimal) -> "\x06" (non-printable)
# 02 (2 decimal) -> "\x02" (non-printable)
# 0A (10 decimal) -> "\n"
# ...
#=> "G\x06\x02\nGR_V\x98\r]\x84\x02\x19\x03\x03"
Map binary_seq to an array of 32-bit little-endian unsigned integers. Each 4 characters (4 bytes = 32 bits) maps to an integer:
data = binary_seq.unpack("VVVV")
# "G\x06\x02\n" -> 167904839 (?)
# "GR_V" -> 1449087559 (?)
# "\x98\r]\x84" -> 2220690840 (?)
# "\x02\x19\x03\x03" -> 50534658 (?)
#=> [167904839, 1449087559, 2220690840, 50534658]
Treating data as an array of 32-bit big-endian unsigned integers, produce the (ASCII-encoded) string representation:
network_seq = data.pack("NNNN")
# 167904839 -> "\n\x02\x06G" (?)
# 1449087559 -> "V_RG" (?)
# 2220690840 -> "\x84]\r\x98" (?)
# 50534658 -> "\x03\x03\x19\x02" (?)
#=> "\n\x02\x06GV_RG\x84]\r\x98\x03\x03\x19\x02"
Encode network_seq in Base64 string:
Base64.encode64(network_seq).strip
#=> "CgIGR1ZfUkeEXQ2YAwMZAg=="
My rough understanding is that big-endian is the standard byte order for network communications, while little-endian is more common on host machines. I'm not sure why nginx provides two forms whose conversion requires switching the byte order.
I also don't understand how the .unpack("VVVV") and .pack("NNNN") steps work. I can see that G\x06\x02\n becomes \n\x02\x06G, but I don't understand the steps that get there. For example, focusing on the first 8 digits of hex_str, why do .pack("H*") and .unpack("VVVV") produce:
"4706020A" -> "G\x06\x02\n" -> 167904839
whereas converting directly to base-10 produces:
"4706020A".to_i(16) -> 1191576074
? The fact that I'm asking this shows I need clarification on what exactly is going on in all these conversions :)
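For what it's worth, the difference is only in which end of the byte sequence is treated as most significant. A minimal Ruby sketch of those same numbers (unpack1 assumes Ruby 2.4+; use unpack(...).first on older versions):
bytes = ["4706020A"].pack("H*")  # "G\x06\x02\n", i.e. the bytes 0x47 0x06 0x02 0x0A
bytes.unpack1("V")               # => 167904839  = 0x0A020647: "V" takes the FIRST byte as LEAST significant (little-endian)
"4706020A".to_i(16)              # => 1191576074 = 0x4706020A: to_i takes the leading hex digits as MOST significant (big-endian)
bytes.reverse.unpack1("N")       # => 167904839 again: a big-endian read of the reversed bytes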

Weird output characters (Chinese characters) when using Ruby to read / write CSV

I'm trying to print the first 5 lines from a set of large (>500MB) CSV files into small header files in order to inspect the content more easily.
I'm using Ruby code to do this, but each line comes out padded with extra Chinese characters, like this:
week_num type ID location total_qty A_qty B_qty count਍㌀㐀ऀ猀漀爀琀愀戀氀攀ऀ㄀㤀㜀ऀ䐀䔀开伀渀氀礀ऀ㔀㐀㜀㈀ ㌀ऀ㔀㐀㜀㈀ ㌀ऀ ऀ㤀㄀㈀㔀㌀ഀ
44 small 14 A 907859 907859 0 550360਍㐀㄀ऀ猀漀爀琀愀戀氀攀ऀ㐀㈀㄀ऀ䐀䔀开伀渀氀礀ऀ㌀ ㈀㄀㜀㐀ऀ㌀ ㈀㄀
The first few lines of input file are like so:
week_num type ID location total_qty A_qty B_qty count
34 small 197 A 547203 547203 0 91253
44 small 14 A 907859 907859 0 550360
41 small 421 A 302174 302174 0 18198
The strange characters appear to be lines 1 and 3 of the data.
Here's my Ruby code:
num_lines = ARGV[0].to_i  # ARGV values are strings, so convert before comparing/decrementing
fh = File.open(file_in,"r")
fw = File.open(file_out,"w")
until (line = fh.gets).nil? or num_lines == 0
  fw.puts line if outflag
  num_lines = num_lines - 1
end
Any idea what's going on and what I can do to simply stop at the line end character?
Looking at the input/output files in hex (useful suggestion by @user1934428):
Input file - each character looks to be two bytes.
Output file - notice the NULL (00) between each single byte character...
Ruby version 1.9.1
The problem is an encoding mismatch, which happens because the encoding is not explicitly specified in the read and write parts of the code. Read the input CSV as a binary file ("rb") with UTF-16LE encoding, and write the output in the same format.
num_lines = ARGV[0].to_i  # again: convert the string argument to an integer
# (assuming the file names also come from the command line)
file_in, file_out = ARGV[1], ARGV[2]
# ****** Specifying the right encodings <<<< this is the key
fh = File.open(file_in,"rb:utf-16le")
fw = File.open(file_out,"wb:utf-16le")
until (line = fh.gets).nil? or num_lines == 0
  fw.puts line
  num_lines = num_lines - 1
end
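If the goal is a header file you can inspect anywhere, Ruby's IO can also transcode while copying. A sketch of the same loop writing UTF-8 instead (assuming the input really is UTF-16LE):
fh = File.open(file_in,"rb:utf-16le:utf-8")  # read UTF-16LE, transcode to UTF-8 internally
fw = File.open(file_out,"w:utf-8")           # write plain UTF-8
The loop body stays exactly the same.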
Useful references:
Working with encodings in Ruby 1.9
CSV encodings
Determining the encoding of a CSV file

Fortran 90 - reading format

I'm trying to read this string from a formatted file: "      PARAMETER (NE_M=10,NL_M=12)".
I want to replace the 12 by 11.
I tried to read the string like this:
integer :: i
character(len=30) :: text
10 format(6x,24a,2i)
read(text_data,10) text, i
write(6,100) text, 11
But it doesn't work. Any idea?
The reading and writing you have written will not do what you want. The input line you are reading is 33 characters wide, your format only accounts for 32 of those characters, and your write will not contain the closing ).
Consider the following code, if you do not need to capture the 12 from the input.
program test
character(len=30) :: text
101 format(a30, i2, ')')
open(unit=10, file='testinput.f')
read(10,101) text
write(*,101) text, 11
end program
and the input (with 6 leading spaces) in file testinput.f:
PARAMETER (NE_M=10,NL_M=12)
when run, produces the output:
% ./test
PARAMETER (NE_M=10,NL_M=11)
This code was compiled and tested with GNU gfortran 4.8.2.
Assuming text_data is the unit number of an open file and 100 is a format statement number:
integer :: i
character(len=30) :: text
10 format(6x,a24,i2)
read(text_data,10) text, i
write(6,100) text(:24), i
Fixing those other issues:
integer :: i
character(len=30) :: text
open(unit=20,file='filename')
10 format(6x,a24,i2)
read(20,10) text, i
write(6,'(6x,a24,i2,a)') text(:24), 11, ')'  ! write 11 and restore the closing )

Decrypting REG_NONE value in Registry

http://i.stack.imgur.com/xaP9s.jpg
Referring to the screenshot above as I'm not able to attach screenshot,
I want to convert the Filesize value, which is shown in hex, into a string, i.e. a human-readable format.
The actual value is 5.85 MB.
While converting, I am not getting that value, i.e. 5.85.
Can anyone suggest how to convert these values?
I have a set of these hex values and want to convert them into a human-readable format.
Each pair of hexadecimal digits represents a byte, and the lowest-value bytes are placed on the left:
0x00 -> 0
0xbb -> 187
0x5d -> 93
0*256^0 + 187*256^1 + 93*256^2 + 0*256^3 + 0*256^4 + 0*256^5 + 0*256^6 + 0*256^7
= 6142720
6142720 / 1024^2
= 5.85815
This storage format is called little-endian: https://en.wikipedia.org/wiki/Little-endian#Little-endian
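The same computation in Ruby, for checking (a sketch assuming Ruby 2.4+ and that the value is stored as the 8 little-endian bytes 00 BB 5D 00 00 00 00 00 described above):
hex = "00bb5d0000000000"            # the bytes as they appear in the registry editor
n = [hex].pack("H*").unpack1("Q<")  # "Q<" = 64-bit unsigned, little-endian
n                                   # => 6142720
n / 1024.0**2                       # => 5.8581... (MB)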
