UTF-8 Encoding in console output - Jenkins - bash

I have already checked many posts related to this subject on Stack Overflow, but nothing helped.
The issue is that I would like to see some Polish characters in my console output, but instead of those characters I see question marks:
echo '???'
I am running Jenkins version 2.323.
I have already added the following parameters to the global variables for Jenkins:
JAVA_TOOL_OPTIONS="-Dfile.encoding=UTF-8"
LANG=UTF-8
and also added those parameters to all nodes as environment variables.
Is anyone able to help with this?
EDIT:
I am executing the following command in my shell execution step:
echo "ąęĆ"
And then when I run the build I see:
[sth] $ /bin/sh -xe /tmp/jenkins2984595068288236962.sh
+ echo ???
???
Finished: SUCCESS
EDIT:
I also read that maybe I should add the encoding to sh, so I tried it like this:
+ /bin/sh encoding: UTF-8 script: echo "???"
/bin/sh: 0: Can't open encoding:
Build step 'Execute shell' marked build as failure
Finished: FAILURE
But it seems the syntax is bad. Forgive me, but I'm not very experienced with such things.
EDIT:
When I execute locale charmap I can see:
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
ANSI_X3.4-1968
It seems like ANSI is still being used.
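That locale output is the real culprit: the agent's shell has no valid UTF-8 locale configured, so everything falls back to ANSI_X3.4-1968 (plain ASCII), and LANG=UTF-8 is not a valid locale name on its own. A minimal sketch of a fix, assuming a glibc-based Linux agent where the C.UTF-8 locale exists (check with locale -a):

```shell
# At the top of the "Execute shell" build step (or in the node's
# environment variables): select a UTF-8 locale that actually exists
# on the agent -- LANG=UTF-8 alone is not a valid locale name.
export LANG=C.UTF-8
export LC_ALL=C.UTF-8

locale charmap   # should now print UTF-8 instead of ANSI_X3.4-1968
echo "ąęĆ"       # the Polish characters should survive
```

If locale -a shows no UTF-8 locales at all, they first have to be generated on the agent (for example with locale-gen en_US.UTF-8 on Debian/Ubuntu).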
Thanks,
J

Related

WebStorm file watcher does not work after Mac update

These are the current settings:
Arguments: $FileName$ $ProjectFileDir$/css/$FileNameWithoutExtension$.css --source-map true --output-style expanded
Output paths to refresh: $ProjectFileDir$/css/$FileNameWithoutExtension$.css:$ProjectFileDir$/css/$FileNameWithoutExtension$.css.map
It's coming out like this. Why is that?
/usr/local/lib/node_modules/node-sass/bin/node-sass aaa.scss /Users/aaa/WebstormProjects/aaa/css/aaa.css --source-map true --output-style expanded
env: node: No such file or directory
Process finished with exit code 127
I fixed it by reinstalling macOS.

ASCII incompatible encoding with normal run, not in debug mode

I'm really confused on this one, and maybe it's a bug in Ruby 2.6.2. I have files that were written as UTF-8 with BOM, so I'm using the following:
filelist = Dir.entries(@input_dirname).join(' ')
filelist = filelist.split(' ').grep(/xml/)
filelist.each do |indfile|
  filecontents_tmp = File.read("#{@input_dirname}/#{indfile}", :encoding => 'bom|utf-8')
  puts filecontents_tmp
end
If I put a debug breakpoint at the puts line, my file is read in properly. If I just run the simple script, I get the following error:
in `read': ASCII incompatible encoding needs binmode (ArgumentError)
I'm confused as to why this would work in debug, but not when run normally. Ideas?
Have you tried printing the default encoding when you run the file as opposed to when you debug the file? There are 3 ways to set / change the encoding in Ruby (that I'm aware of), so I wonder if it's different between running the file and debugging. You should be able to tell by printing the default encoding: puts Encoding.default_external.
As for actually fixing the issue, I ran into a similar problem and found an answer which said to add binmode as an option to the File.open call, and it worked for me.
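As a sketch of that fix (the file name here is hypothetical): binary mode can be combined with the bom|utf-8 decoder directly in the mode string, so the BOM is stripped and the error goes away regardless of the default external encoding:

```ruby
# Write a UTF-8 file that starts with a BOM, as the original files do
File.write("example.xml", "\xEF\xBB\xBF<root>ąęĆ</root>".b, mode: "wb")

# "rb" opens in binary mode; ":bom|utf-8" strips the BOM and tags
# the result as UTF-8 -- this avoids the
# "ASCII incompatible encoding needs binmode" ArgumentError
contents = File.read("example.xml", mode: "rb:bom|utf-8")
puts contents           # => <root>ąęĆ</root>, BOM removed
```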

What is the encoding of the subprocess module output in Python 2.7?

I'm trying to retrieve the content of a zipped archive with Python 2.7 on 64-bit Windows Vista. I tried making a system call to 7-Zip (my favourite archive manager) using the subprocess module:
# -*- coding: utf-8 -*-
import sys, os, subprocess
Extractor = r'C:\Program Files\7-Zip\7z.exe'
ArchiveName = r'C:\temp\bla.zip'
output = subprocess.Popen([Extractor,'l','-slt',ArchiveName],stdout=subprocess.PIPE).stdout.read()
This works fine as long as the archive contains only ASCII filenames, but when I try it with non-ASCII names I get an encoded output string in which ä, ë, ö, ü have been replaced by \x84, \x89, \x94, \x81 (etcetera). I've tried all kinds of decode/encode calls, but I'm just too inexperienced with Python to reproduce the original characters with umlauts (which is required if I want to follow up this step with, e.g., an extraction subprocess call to 7z).
Simply put my question is: How do I get this to work also for archives with non-ascii content?
... or to put it in a more convoluted way: Is the output of subprocess always of a fixed encoding or not?
In the former case -> Which encoding is it?
In the latter case -> How can I control or uncover the encoding of the output of subprocess? Inspired by similar questions on this site, I've tried adding
import codecs
sys.stdout = codecs.getwriter('utf8')(sys.stdout)
and I've also tried
my_env = os.environ
my_env['PYTHONIOENCODING'] = 'utf-8'
output = subprocess.Popen([Extractor,'l','-slt',ArchiveName],stdout=subprocess.PIPE,env=my_env).stdout.read()
but neither seems to alter the encoding of the output variable (or to reproduce the umlaut).
You can try using the -sccUTF-8 switch of 7-Zip to force the output to be UTF-8.
Here is the reference page: http://en.helpdoc-online.com/7-zip_9.20/source/cmdline/switches/scc.htm
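For the decoding side, a small sketch of what those \x84-style bytes actually are: Windows console tools such as 7z emit the console's OEM code page (often cp850 on western-European systems; that exact code page is an assumption here, check yours with chcp), so decoding with that code page restores the umlauts. The filename below is hypothetical:

```python
# -*- coding: utf-8 -*-
# Bytes captured from subprocess stdout are in the console's OEM
# code page, not UTF-8: b'\x84' is 'ä' in cp850 (and cp437).
raw = b'b\x84r.txt'            # e.g. a filename as 7z printed it
name = raw.decode('cp850')     # -> u'bär.txt'
print(name)
```

With the -sccUTF-8 switch mentioned above, the output becomes UTF-8 and raw.decode('utf-8') is the right call instead.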

How do I run a non-ASCII/Unicode shell command from Ruby on Windows?

I cannot figure out the proper way to encode a shell command to run from Ruby on Windows. The following script reproduces the problem:
# encoding: utf-8
def test(word)
  returned = `echo #{word}`.chomp
  puts "#{word} == #{returned}"
  raise "Cannot roundtrip #{word}" unless word == returned
end
test "good"
test "bÃd"
puts "Success"
# win7, cmd.exe font set to Lucinda Console, chcp 65001
# good == good
# bÃd == bÃd
Is this a bug in Ruby, or do I need to encode the command string manually to a specific encoding, before it gets passed to the cmd.exe process?
Update: I want to make it clear that the problem is not with reading the output back into Ruby; it's purely with sending the command to the shell. To demonstrate:
# encoding: utf-8
File.open("bbbÃd.txt", "w") do |f|
  f.puts "nothing to see here"
end
filename = Dir.glob("bbb*.txt").first
command = "attrib #{filename}"
puts command.encoding
puts "#{filename} exists?: #{File.exists?(filename)}"
system command
File.delete(filename)
#=>
# UTF-8
# bbbÃd.txt exists?: true
# File not found - bbbÃd.txt
You can see that the file gets created correctly, and the File.exists? method confirms that Ruby can see it, but when I try to run the attrib command on it, it tries to use a different filename.
Try setting the environment variable LC_CTYPE like this:
LC_CTYPE=en_US.UTF-8
Set this globally in the command shell or inside your Ruby script:
ENV['LC_CTYPE']='en_US.UTF-8'
I had the same issue using drag-and-drop in Windows.
When I dropped a file with Unicode characters in its name, the Unicode characters were replaced by question marks.
I tried everything with encoding, changing the drop handler, etc.
The only thing that worked was creating a batch file with the following contents:
ruby.exe -Eutf-8 C:\Users\user\myscript.rb %*
The batch file does receive the Unicode characters correctly, as you can see if you do an echo %* first, followed by a pause.
I needed to add the -Eutf-8 parameter to have the filename come through as UTF-8 in the script itself; having the following lines in my script was not enough:
#encoding: UTF-8
Encoding.default_external = Encoding::UTF_8
Encoding.default_internal = Encoding::UTF_8
Hope this helps people with similar problems.

File.exist? not working when directory name has special characters

File.exist? is not working with a directory name that has special characters, for something like the one given below:
path = "/home/cis/Desktop/'El%20POP%20que%20llevas%20dentro%20Vol.%202'/*.mp3"
It works fine, but if the name has letters like ñ it returns false.
Please help with this.
Try the following:
Make sure you're running 1.9.2 or greater and put # encoding: UTF-8 at the top of your file (which must be saved as UTF-8, and your editor must support that).
If you're running MRI (i.e. not JRuby or another implementation), you can add the environment variable RUBYOPT=-Ku instead of putting # encoding: UTF-8 at the top of each file.
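As a small sketch of the encoding side of this (the /tmp path is hypothetical): on Linux the kernel passes raw bytes through, so the check mainly breaks when the path string arrives tagged as ASCII-8BIT; re-tagging it as UTF-8 before calling File.exist? is usually enough:

```ruby
# encoding: UTF-8
require 'fileutils'

dir = "/tmp/mañana_test"            # hypothetical directory with ñ
FileUtils.mkdir_p(dir)

# a path that arrived as raw bytes (e.g. from an external source)
raw = dir.dup.force_encoding(Encoding::ASCII_8BIT)

# re-tag it as UTF-8 before the existence check
puts File.exist?(raw.force_encoding(Encoding::UTF_8))   # => true
```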
