I have the following code:
'^XA^CI28^CWZ,E:TT0003M_.FNT^FS^XZ^XA^FWN^FO70,50^A0,30,25^FH^FD'.function_parse($name).'^FS^FO70,90^BY2^B3,,100^FD' . $product['value'] . '^FS^XZ';
function function_parse($str) {
$str = str_replace('Θ','_CE_98',$str);
$str = str_replace('Κ','_CE_9A',$str);
$str = str_replace('Μ','_CE_9C',$str);
$str = str_replace('Ν','_CE_9D',$str);
$str = str_replace('Ξ','_CE_9E',$str);
$str = str_replace('Ο','_CE_9F',$str);
$str = str_replace('ρ','_CF_81',$str);
$str = str_replace('ψ','_CF_88',$str);
$str = str_replace('ό','_CF_8C',$str);
$str = str_replace('Ό','_CE_8C',$str);
$str = str_replace('ύ','_cf_8d',$str);
$str = str_replace('Ύ','_ce_8e',$str);
$str = str_replace('ώ','_cf_8e',$str);
$str = str_replace('Ώ','_ce_8f',$str);
$str = str_replace('Έ','_ce_88',$str);
$str = str_replace('Ί','_ce_8a',$str);
return $str;
}
I have problems with the above Greek characters: the replacement I have made returns, instead of Chinese letters, a ? for each such letter. What should the replacement be for these fonts?
We use a GC420t Zebra printer, if that is of any help.
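For reference, the hand-written replacements above are just the UTF-8 bytes of each character written as ^FH underscore-hex escapes (Θ is CE 98 in UTF-8, hence _CE_98). Below is a minimal sketch that generates those escapes for any non-ASCII character instead of hard-coding each one; it is written in Perl since most of this thread uses it, zpl_field_hex is a made-up helper name, and it does not answer which printer font actually contains the Greek glyphs:

#!/usr/bin/perl
use strict;
use warnings;
use utf8;                 # the Greek literal below is UTF-8 in the source
use Encode qw(encode);

# Hex-encode the UTF-8 bytes of every non-ASCII character as _XX escapes,
# matching the str_replace() table above (e.g. "Θ" -> "_CE_98").
sub zpl_field_hex {
    my ($text) = @_;
    my $bytes = encode('UTF-8', $text);
    $bytes =~ s/([^\x20-\x7E])/sprintf('_%02X', ord($1))/ge;
    return $bytes;
}

print zpl_field_hex("Θάλασσα"), "\n";   # _CE_98_CE_AC_CE_BB_CE_B1_CF_83_CF_83_CE_B1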
I want to turn Unicode text into pure ASCII encoding using escape sequences.
Input: Ɏɇ衳 outputs to ... "\u024E\u0247\u8873"
Basically the opposite of this.
$ echo -e "\u024E\u0247\u8873"
Ɏɇ衳
I want the encoding to stay UTF-8; all I'm doing is changing forms.
I've tried:
iconv -f utf8 -t utf8 $file
iconv -f utf8 -t utf16 $file
The codes you mention (024E, 0247, ...) are called Unicode code points and are independent of UTF-8 or UTF-16.
If Perl is an option for you, you can retrieve the codes with:
perl -C -ne 'map {printf "\\u%04X", ord} (/./g)' <<< "Ɏɇ衳"; echo
which outputs:
\u024E\u0247\u8873
Explanation
The perl code above is mostly equivalent to:
#!/usr/bin/perl
use utf8;
$str = "Ɏɇ衳";
foreach $chr ($str =~ /./g) {
printf "\\u%04X", ord($chr);
}
print "\n";
use utf8 tells Perl that the script source, and therefore the embedded string literal, is encoded in UTF-8.
($str =~ /./g) breaks the string into a list of characters.
foreach iterates over that list of characters.
ord returns the code point of the given character.
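For the reverse direction (a sketch, assuming the fixed four-digit \uXXXX form produced above), chr is the inverse of ord:

#!/usr/bin/perl
use strict;
use warnings;
binmode STDOUT, ':encoding(UTF-8)';      # print the decoded wide characters as UTF-8

my $escaped = '\u024E\u0247\u8873';
(my $decoded = $escaped) =~ s/\\u([0-9A-Fa-f]{4})/chr(hex($1))/ge;
print "$decoded\n";                      # Ɏɇ衳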
EDIT
If you want to auto-scale the number of digits to account for out-of-BMP characters, try this instead:
#!/usr/bin/perl
use utf8;
$str = "Ɏɇ衳";
foreach $chr ($str =~ /./g) {
$n = ord($chr);
$d = $n > 0xffff ? 8 : 4;
printf "\\u%0${d}X", $n;
}
If you have that in a file, you can use iconv:
iconv -f $input_encoding -t $output_encoding $file
Check man iconv for more details.
I have a few lines of code in a file
(the code has a few newlines, tabs, strings and pattern-strings).
I want to get the content of this file as a string value,
so that it can be sent as the string value of some parameter in JSON:
{param1: "value1", code: "code-content-from-file-should-go-here"}
Let's say the file content is
function string.urlDecode(str)
if string.isEmpty(str) then return str end
str = string.gsub(str, "+", " ")
str = string.gsub(str, "%%(%x%x)", function(h) return string.char(tonumber(h, 16)) end)
str = string.gsub(str, "\r\n", "\n")
return str
end
which should get converted to the following (newlines, tabs and the code formatting in general are preserved; characters such as " and \ are escaped):
function string.urlDecode(str)\n if string.isEmpty(str) then return str end\n str = string.gsub(str, \"+\", \" \")\n str = string.gsub(str, \"%%(%x%x)\", function(h) return string.char(tonumber(h, 16)) end)\n str = string.gsub(str, \"\\r\\n\", \"\\n\")\n return str\nend
So that json becomes
{param1: "value1", code: "function string.urlDecode(str)\n if string.isEmpty(str) then return str end\n str = string.gsub(str, \"+\", \" \")\n str = string.gsub(str, \"%%(%x%x)\", function(h) return string.char(tonumber(h, 16)) end)\n str = string.gsub(str, \"\\r\\n\", \"\\n\")\n return str\nend"}
While the conversion of file content to a string in the manner shown above can be done
using sed (from a few related Stack Overflow threads like How can I replace a newline (\n) using sed?),
I would have to handle each case myself: newlines, tabs, ", \, and whatever other special characters need to be escaped (which I don't know).
Is there any bash command (or maybe Python module) that can handle all such scenarios for code-content-from-file to string conversion?
This seems like a common use case for anyone who wants to send code content in JSON.
If the content is in file.txt:
function encode {
local input=$1
local output
for ((i=0;i<${#input};i+=1)); do
ic=${input:$i:1}
if [[ $ic = $'\n' ]]; then
oc='\n'
elif [[ $ic = '\' || $ic = '"' ]]; then
oc='\'$ic
# [[ $ic < $'\040' ]] # works only if LC_COLLATE=C or LC_ALL=C
elif (( $(printf "%d" "'$ic") < 32 )); then
oc='\0'$(printf "%02o" "'$ic")
else
oc=$ic
fi
output=$output$oc
done
echo "$output"
}
printf '{param1: "%s", code: "%s"}' "value1" "$(encode "$(<file.txt)")"
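An alternative sketch, assuming Perl with the core JSON::PP module (in core since 5.14) is acceptable, is to slurp the file and let a JSON encoder do all of the escaping:

#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP;

# Read the whole file as one string; the encoder escapes \n, \t, ", \ and friends.
my $code = do {
    local $/;                                        # slurp mode
    open my $fh, '<', 'file.txt' or die "file.txt: $!";
    <$fh>;
};

print JSON::PP->new->encode({ param1 => 'value1', code => $code }), "\n";

The same idea works with any JSON library; the point is to avoid maintaining the escape rules by hand.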
I have a file like below:
[NAMES]
biren
bikash
dibya
[MAIL]
biren_k
bikash123
dibya008
My output should be like below:
[NAMES]
[MAIL]
I tried the code below just to remove the lines between [NAMES] and [MAIL], but it did not work.
sed -n '/NAMES/{p; :a; N; /MAIL/ba; s/.*\n//}; p' input.txt
Can anyone help please? I would prefer Perl code, if possible.
NOTE: Like [NAMES] and [MAIL], I have a lot of headers in my actual file; I have only shown two here. I have to replace the contents below the headers (not all of them, only selected headers, which sit at random line numbers) with new contents, but first I need to delete the existing contents below them. That's why I need my output like this. Any suggestions, please?
You can modify the sed command as follows:
$ sed '/\[NAMES\]/, /\[MAIL\]/ {/^\[/p; d}' input
[NAMES]
[MAIL]
biren_k
bikash123
dibya008
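An alternative Perl one-liner sketch (NAMES and MAIL stand in for whichever headers should be emptied; content under any other header is kept):

perl -ne '$skip = /^\[(NAMES|MAIL)\]/ ? 1 : (/^\[/ ? 0 : $skip); print if !$skip or /^\[/;' input.txt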
Please try this; it may be helpful for your question:
%hashes = (
"[NAMES]" => "<br/>kumar<br/>avi<br/><br/>\n",
"[MAIL]" => "<br/>biren_k<br/>bikash123<br/>dibya008<br/>\n"
);
my @arr = <DATA>;
foreach my $snarr (@arr)
{
    chomp($snarr);
    push(@newarr, "$snarr\n$hashes{$snarr}") if ( $hashes{$snarr} );
}
print @newarr;
__DATA__
[NAMES]
biren
bikash
dibya
[MAIL]
biren_k
bikash123
dibya008
Just replace the lines between my @erase = qw[ and ]; with the HEADERS whose content you mean to empty out.
#!/usr/bin/env perl
use strict;
use warnings;
push @ARGV, 'file.txt';
# here list out the HEADERS
# which content you wanna erase
my @erase = qw[
NAMES
MAIL
];
my %dump;
my $header;
# build a hash from your file
while (<>) {
if (/^\[([^\]]+)\]$/) {
$header = $1;
$dump{$header} = "";
next;
}
$dump{$header} .= $_ if $header;
}
# replace the content
# with empty string
foreach (@erase) {
$dump{$_} = "";
}
# now print it back to <STDOUT>
foreach (sort keys %dump) {
print "[$_]\n$dump{$_}\n";
}
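For example (PHONE is a made-up header name; per the NOTE, the real file has many more sections), emptying three sections just means listing them, and the cleaned output goes to STDOUT, so redirect it to a new file:

# erase only the content under these headers; every other section is kept
my @erase = qw[
    NAMES
    MAIL
    PHONE
];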
I found a solution to my problem; here it is:
my @name_var = ();
while (<STDIN>)
{
last if ($_ =~ /^\n/ );
push(@name_var, $_);
}
my @mail_add = ();
while (<STDIN>)
{
last if ($_ =~ /^\n/ );
push(@mail_add, $_);
}
open(my $var, "input.txt") || die("Input File not found");
open(my $out, ">temp.txt") || die("Temp File not created");
while($line = <$var>)
{
# print $line;
if( $line =~ /\[NAMES\]/)
{
print $out $line;
print $out @name_var;
while(($line = <$var>) && ($line !~ /^\n/))
{
}
}
if( $line =~ /\[MAIL\]/)
{
print $out $line;
print $out @mail_add;
while(($line = <$var>) && ($line !~ /^\n/))
{
}
}
print $out $line;
}
close($var);
close($out);
open($var1,">input.txt") || die("failed to open\n");
open($out1,"<temp.txt") || die("failed to open\n");
while($fl = <$out1>)
{
print $var1 $fl;
}
close($var1);
close($out1);
Thank you all. I got the solution from Stack Overflow, PerlMonks and a few more Perl-related sites.
I have this very long transliteration:
$text =~ tr/áàăâǎåǻäǟãȧǡąāȁȃɑʙƀɓƃćĉčċçȼƈɕʗďđðɖɗƌȡéèĕêěëėȩęēȅȇɇɛ/aaaaaaaaaaaaaaaaabbbbcccccccccdddddddeeeee/;
# Etc. (About 400 chars)
I want to split it into several transliterations since the resulting code would be easier to maintain:
$text =~ tr/áàăâǎåǻäǟãȧǡąāȁȃɑ/aaaaaaaaaaaaaaaaa/;
$text =~ tr/ʙƀɓƃ/bbbb/;
$text =~ tr/ćĉčċçȼƈɕʗ/ccccccccc/;
# Etc.
I believe that is going to slow things down, but I'd like to know for sure. This process runs about 1000 times per second on a pretty busy server.
Thanks.
You could build a transliterator:
my %translits = (
'áàăâǎåǻäǟãȧǡąāȁȃɑ' => 'a',
'ʙƀɓƃ' => 'b',
'ćĉčċçȼƈɕʗ' => 'c',
);
my $pat = '';
my $repl = '';
for (keys(%translits)) {
    $pat  .= $_;
    $repl .= $translits{$_} x length($_);
}
my $tr1 = eval "sub { tr/\Q$pat\E/\Q$repl\E/ }" or die $@;
-or-
my $tr2 = eval "sub { \$_[0] =~ tr/\Q$pat\E/\Q$repl\E/ }" or die $@;
Then use it like this:
$tr1->() for $str;
-or-
$tr2->($str);
Of course, you could always use Text::Unidecode.
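For reference, a minimal Text::Unidecode sketch (assuming the CPAN module is installed); note that it transliterates everything to plain ASCII, which is broader than the hand-built tr tables:

use utf8;
use Text::Unidecode;            # CPAN; exports unidecode() by default

my $text = "áàâ ćĉč ďđð";
print unidecode($text), "\n";   # roughly: aaa ccc ddd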
I would expect the second solution with three (or more) tr operations to be slower, because every pass re-scans the whole of $text, including characters an earlier pass has already handled.
Here is a benchmark:
use Benchmark qw(:all);
my $str = 'áàăâǎåǻäǟãȧǡąāȁȃɑʙƀɓƃćĉčċçȼƈɕʗďđðɖɗƌȡéèĕêěëėȩęēȅȇɇɛ/aaaaaaaaaaaaaaaaabbbbcccccccccdddddddeeeee';
my $count = -2;
cmpthese($count, {
'one tr' => sub {
$str =~ tr/áàăâǎåǻäǟãȧǡąāȁȃɑʙƀɓƃćĉčċçȼƈɕʗďđðɖɗƌȡéèĕêěëėȩęēȅȇɇɛ/aaaaaaaaaaaaaaaaabbbbcccccccccdddddddeeeee/;
},
'multi tr' => sub {
$str =~ tr/áàăâǎåǻäǟãȧǡąāȁȃɑ/aaaaaaaaaaaaaaaaa/;
$str =~ tr/ʙƀɓƃ/bbbb/;
$str =~ tr/ćĉčċçȼƈɕʗ/ccccccccc/;
$str =~ tr/ďđðɖɗƌȡ/ddddddd/;
$str =~ tr/éèĕêěëėȩęēȅȇɇɛ/eeeee/;
},
});
result:
              Rate multi tr   one tr
multi tr 1215538/s       --     -81%
one tr   6271883/s     416%       --
As we can see, the single tr is about 5 times faster than the multiple tr operations.
I have a .h file containing, among other things, data in this format:
struct X[]{
{"Field", "value1 value2 value"},
{"Field2", "value11 value12 value232"},
{"Field3", "x y z"},
{"Field4", "a bbb s"},
{"Field5", "sfsd sdfdsf sdfs"};
/****************/
};
I have a text file containing values that I want to replace in the .h file with new values:
value1 Valuesdfdsf1
value2 Value1dfsdf
value3 Value1_another
sfsd sfsd_ewew
sdfdsf sdfdsf_ew
sdfs sfsd_new
The resulting .h file will contain the replacements from the text file above; everything else remains the same.
struct X[]{
{"Field1", "value11 value12 value232"},
{"Field2", "value11 value12 value232"},
{"Field3", "x y z"},
{"Field4", "a bbb s"},
{"Field5", "sfsd_ewew sdfdsf_ew sdfs_new"};
/****************/
};
Please help me come up with a solution to accomplish this using Unix tools: awk, perl, bash, sed, etc.
cat junk/n2.txt | perl -e '{use File::Slurp; my @r = File::Slurp::read_file("junk/n.txt"); my %r = map {chomp; (split(/\s+/,$_))[0,1]} @r; while (<>) { unless (/^\s*{"/) {print $_; next;}; my ($pre,$values,$post) = ($_ =~ /^(\s*{"[^"]+", ")([^"]+)(".*)$/); my @new_values = map { exists $r{$_} ? $r{$_}:$_ } split(/\s+/,$values); print $pre . join(" ",@new_values) . $post . "\n"; }}'
Result:
struct X[]{
{"Field", "value1 Value1dfsdf value"},
{"Field2", "value11 value12 value232"},
{"Field3", "x y z"},
{"Field4", "a bbb s"},
{"Field5", "sfsd_ewew sdfdsf_ew sfsd_new"};
/****************/
};
Code untangled:
use File::Slurp;
my @replacements = File::Slurp::read_file("junk/n.txt");
my %r = map {chomp; (split(/\s+/,$_))[0,1]} @replacements;
while (<>) {
    unless (/^\s*{"/) {print $_; next;}
    my ($pre,$values,$post) = ($_ =~ /^(\s*{"[^"]+", ")([^"]+)(".*)$/);
    my @new_values = map { exists $r{$_} ? $r{$_} : $_ } split(/\s+/, $values);
    print $pre . join(" ",@new_values) . $post . "\n";
}
#!/usr/bin/perl
use strict; use warnings;
# you need to populate %lookup from the text file
my %lookup = qw(
value1 Valuesdfdsf1
value2 Value1dfsdf
value3 Value1_another
sfsd sfsd_ewew
sdfdsf sdfdsf_ew
sdfs sfsd_new
);
while ( my $line = <DATA> ) {
if ( $line =~ /^struct \w+\Q[]/ ) {
print $line;
process_struct(\*DATA, \%lookup);
}
else {
print $line;
}
}
sub process_struct {
my ($fh, $lookup) = @_;
while (my $line = <$fh> ) {
unless ( $line =~ /^{"(\w+)", "([^"]+)"}([,;])\s+/ ) {
print $line;
return;
}
my ($f, $v, $p) = ($1, $2, $3);
$v =~ s/(\w+)/exists $lookup->{$1} ? $lookup->{$1} : $1/eg;
printf qq|{"%s", "%s"}%s\n|, $f, $v, $p;
}
return;
}
__DATA__
struct X[]{
{"Field", "value1 value2 value"},
{"Field2", "value11 value12 value232"},
{"Field3", "x y z"},
{"Field4", "a bbb s"},
{"Field5", "sfsd sdfdsf sdfs"};
/****************/
};
Here's a simple-looking program:
use strict;
use warnings;
use File::Copy;
use constant {
OLD_HEADER_FILE => "headerfile.h",
NEW_HEADER_FILE => "newheaderfile.h",
DATA_TEXT_FILE => "data.txt",
};
open (HEADER, "<", OLD_HEADER_FILE) or
die qq(Can't open file old header file ") . OLD_HEADER_FILE . qq(" for reading);
open (NEWHEADER, ">", NEW_HEADER_FILE) or
die qq(Can't open file new header file ") . NEW_HEADER_FILE . qq(" for writing);
open (DATA, "<", DATA_TEXT_FILE) or
die qq(Can't open file data file ") . DATA_TEXT_FILE . qq(" for reading);
#
# Put Replacement Data in a Hash
#
my %dataHash;
while (my $line = <DATA>) {
chomp($line);
my ($key, $value) = split (/\s+/, $line);
$dataHash{$key} = $value if ($key and $value);
}
close (DATA);
#
# NOW PARSE THOUGH HEADER
#
while (my $line = <HEADER>) {
chomp($line);
if ($line =~ /^\s*\{"Field/) {
foreach my $key (keys(%dataHash)) {
$line =~ s/\b$key\b/$dataHash{$key}/g;
}
}
print NEWHEADER "$line\n";
}
close (HEADER);
close (NEWHEADER);
copy(NEW_HEADER_FILE, OLD_HEADER_FILE) or
die qq(Unable to replace ") . OLD_HEADER_FILE . qq(" with ") . NEW_HEADER_FILE . qq(");
I could make it more efficient by using map, but that makes it harder to understand (see the sketch after this explanation).
Basically:
I open three files: the original header, the new header I'm building, and the data file.
I first put my data into a hash where the replacement text is keyed by the original text. (I could have done it the other way around if I wanted.)
I then go through each line of the original header.
  - If I see a line that looks like a field line, I know that I might have to do a replacement.
  - For each entry in my %dataHash, I substitute $key with the $dataHash{$key} replacement value. I use \b to mark word boundaries, so that the pattern field1 does not also match inside a longer token like field11.
  - Then I write the line back to my new header file. If I didn't replace anything, I just write back the original line.
Once I finish, I copy the new header over the old header file.
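The "using map" variant mentioned above would look roughly like this (a sketch: like the original loop it assumes whitespace-separated key/value lines, though it drops the explicit key-and-value check):

# Build %dataHash in a single pass with map instead of the while loop.
my %dataHash = map { chomp; split /\s+/, $_, 2 } <DATA>;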
This script should work.
keyval is the file containing the key/value pairs.
filetoreplace is the file containing the data to be modified.
The file named changed will contain the changes.
#!/bin/sh
echo
keylist=`cat keyval | awk '{ print $1}'`
while read line
do
for i in $keylist
do
if echo $line | grep -wq $i; then
value=`grep -w $i keyval | awk '{print $2}'`
line=`echo $line | sed -e "s/$i/$value/g"`
fi
done
echo $line >> changed
done < filetoreplace
This might be kind of slow if your files are big.
gawk -F '[ \t]*|"' 'FNR == NR {repl[$1]=$2;next}{for (f=1;f<=NF;++f) for (r in repl) if ($f == r) $f=repl[r]; print} ' keyfile file.h