Basically the title. I'm working on upgrading our existing HAProxy from 1.5 to the latest version. As part of that, I'm setting up a test case to ensure our old setup works on it. However, when I try to run it, I get the following error:
[NOTICE] (28948) : haproxy version is 2.4.1-1ce7d49
[NOTICE] (28948) : path to executable is /home/user/test/usr/local/sbin/haproxy
[ALERT] (28948) : parsing [test.cfg:22]: Missing LF on last line, file might have been truncated at position 68.
[ALERT] (28948) : Error(s) found in configuration file : test.cfg
[ALERT] (28948) : Fatal errors found in configuration.
I've tried looking it up, but I cannot find anything on the error. I've already checked my config file, and it is using the correct Unix format. Also, my test config works for the older version of HAProxy.
global
stats timeout 30s
user root
group root
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
frontend http_front
bind *:9090
default_backend http_back
backend http_back
balance roundrobin
server test.server.com 127.0.0.1:8000
In addition, I did the following to install haproxy:
tar xvf haproxy-2.4.1.tar.gz
cd haproxy-2.4.1
# edit the Makefile with vi and set PREFIX = /home/user/test/usr/local
make TARGET=linux-glibc
make install
Is there anything that sticks out regarding my config file? Or did I miss something in the installation process?
Most likely you truncated your config file somehow; the config as posted looks OK.
I didn't test it on 2.4, but I found a reference to it (https://www.mail-archive.com/haproxy#formilux.org/msg37698.html) and was able to reproduce it on 2.2 (it was a warning in 2.2 and became an error in 2.3, as the HAProxy message describes) with this simple config:
defaults
timeout connect 5000
timeout client 50000
timeout server 50000
frontend http_front
bind *:80
mode http
http-request deny
This config is valid:
# haproxy -c -f test.conf
Configuration file is valid
Now I will truncate it by one byte to reproduce the error:
# wc -c test.conf
149 test.conf
# dd if=test.conf of=test2.conf bs=1 count=148
148+0 records in
148+0 records out
148 bytes copied, 0.00267937 s, 55.2 kB/s
# hexdump -C test.conf
00000000 64 65 66 61 75 6c 74 73 0a 20 20 20 74 69 6d 65 |defaults. time|
00000010 6f 75 74 20 63 6f 6e 6e 65 63 74 20 35 30 30 30 |out connect 5000|
00000020 0a 20 20 20 74 69 6d 65 6f 75 74 20 63 6c 69 65 |. timeout clie|
00000030 6e 74 20 35 30 30 30 30 0a 20 20 20 74 69 6d 65 |nt 50000. time|
00000040 6f 75 74 20 73 65 72 76 65 72 20 35 30 30 30 30 |out server 50000|
00000050 0a 0a 66 72 6f 6e 74 65 6e 64 20 68 74 74 70 5f |..frontend http_|
00000060 66 72 6f 6e 74 0a 20 20 20 62 69 6e 64 20 2a 3a |front. bind *:|
00000070 38 30 0a 20 20 20 6d 6f 64 65 20 68 74 74 70 0a |80. mode http.|
00000080 20 20 20 68 74 74 70 2d 72 65 71 75 65 73 74 20 | http-request |
00000090 64 65 6e 79 0a |deny.|
00000095
# hexdump -C test2.conf
00000000 64 65 66 61 75 6c 74 73 0a 20 20 20 74 69 6d 65 |defaults. time|
00000010 6f 75 74 20 63 6f 6e 6e 65 63 74 20 35 30 30 30 |out connect 5000|
00000020 0a 20 20 20 74 69 6d 65 6f 75 74 20 63 6c 69 65 |. timeout clie|
00000030 6e 74 20 35 30 30 30 30 0a 20 20 20 74 69 6d 65 |nt 50000. time|
00000040 6f 75 74 20 73 65 72 76 65 72 20 35 30 30 30 30 |out server 50000|
00000050 0a 0a 66 72 6f 6e 74 65 6e 64 20 68 74 74 70 5f |..frontend http_|
00000060 66 72 6f 6e 74 0a 20 20 20 62 69 6e 64 20 2a 3a |front. bind *:|
00000070 38 30 0a 20 20 20 6d 6f 64 65 20 68 74 74 70 0a |80. mode http.|
00000080 20 20 20 68 74 74 70 2d 72 65 71 75 65 73 74 20 | http-request |
00000090 64 65 6e 79 |deny|
00000094
# haproxy -c -f test2.conf
[WARNING] 193/184514 (10725) : parsing [test2.conf:9]: Missing LF on last line, file might have been truncated at position 21. This will become a hard error in HAProxy 2.3.
Warnings were found.
Configuration file is valid
Note the missing 0a (LF) at the end of test2.conf. Check your config with hexdump -C.
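If you just want to inspect the last byte instead of dumping the whole file, a quick sketch (assuming GNU tail and hexdump are available):
tail -c 1 test.cfg | hexdump -C
# a well-formed file ends with 0a (LF); anything else means the last line is unterminated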
I had the same issue and tried fixing it in an editor, but that didn't work.
I was able to solve the issue by adding a new line at the end of the file.
I used the echo command and it worked:
echo "" >> /etc/hapee-2.2/hapee-lb.cfg
I had the exact same issue using Rancher when deploying Bitnami HAProxy. I quickly found that the constructor was adding the |- block chomping indicator, which removes the trailing '0a'. I had to edit the file in YAML format, making sure to remove this indicator.
example:
config: |-
  defaults
    timeout.....
  frontend http
    bind *:8080
    mode http
    timeout client 10s
    use_backend all
  backend all
    mode http
    server s0 nodeapp0:3000
    server s1 nodeapp1:3001
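For reference, in YAML block scalars the chomping indicator controls the trailing newline: |- (strip) removes it, while a plain | (clip) keeps it, which is what HAProxy needs here. A minimal sketch of the corrected entry:
# '|' (clip) keeps the final LF; '|-' (strip) removes it and triggers the "Missing LF" error
config: |
  defaults
  ...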
NOTE: Use some newlines (pressing Enter) in your config file. Here is my simple cfg file; I used the Enter key to deal with the error.
Working Solution:
I faced this issue when running HAProxy in Docker on Windows.
After turning on Show All Characters in Notepad++, I could see that the last line was missing the newline control character. (On Windows, both CRLF and LF work fine.)
Based on other answers in this thread, I proceeded to add a new line by simply hitting the Enter key after the last character in the config file.
But I still saw the same error:
parsing [/usr/local/etc/haproxy/haproxy.cfg:28]: Missing LF on last line, file might have been truncated at position 3. Missing LF on last line, file might have been truncated at position 3.
After some trial and error, I got it to work with the changes below.
Ensure that the last character of the config file is either CRLF or LF. If there are additional spaces after it, remove them (in my case Notepad++ added two spaces for auto-indentation, which kept causing the issue even after I added a new line).
Related
I'm trying to load log text files from an FTP server into Elastic.
The log files look like this:
0:0:21: Processing events from events
0:0:21: Processing croned build types from q_type
0:0:21: Process croned releases from trls
0:0:22: Processing croned regression list from regression
0:0:22: Processing commit loop
In data provenance (hex view, because the other views don't show anything) I see the data like this:
0x00000090 66 69 65 6C 64 3A 20 52 4E 20 53 74 61 74 75 73 field: RN Status
0x000000A0 2E 20 4F 62 6A 65 63 74 20 72 65 66 65 72 65 6E . Object referen
0x000000B0 63 65 20 6E 6F 74 20 73 65 74 20 74 6F 20 61 6E ce not set to an
0x000000C0 20 69 6E 73 74 61 6E 63 65 20 6F 66 20 61 6E 20 instance of an
0x000000D0 6F 62 6A 65 63 74 2E 0D 0A 30 3A 30 3A 31 34 3A object...0:0:14:
0x000000E0 20 43 61 6E 27 74 20 72 65 61 64 20 69 73 73 75 Can't read issu
0x000000F0 65 3A 20 41 49 2D 32 34 37 20 63 75 73 74 6F 6D e: AI-247 custom
0x00000100 20 66 69 65 6C 64 3A 20 52 4E 20 53 65 63 74 69 field: RN Secti
0x00000110 6F 6E 2E 20 4F 62 6A 65 63 74 20 72 65 66 65 72 on. Object refer
0x00000120 65 6E 63 65 20 6E 6F 74 20 73 65 74 20 74 6F 20 ence not set to
0x00000130 61 6E 20 69 6E 73 74 61 6E 63 65 20 6F 66 20 61 an instance of a
0x00000140 6E 20 6F 62 6A 65 63 74 2E 0D 0A 30 3A 30 3A 31 n object...0:0:1
0x00000150 34 3A 20 43 61 6E 27 74 20 72 65 61 64 20 69 73 4: Can't read is
0x00000160 73 75 65 3A 20 41 49 2D 32 34 37 20 63 75 73 74 sue: AI-247 cust
0x00000170 6F 6D 20 66 69 65 6C 64 3A 20 52 4E 20 44 6F 63 om field: RN Doc
0x00000180 20 69 6E 20 56 65 72 2E 20 4F 62 6A 65 63 74 20 in Ver. Object
0x00000190 72 65 66 65 72 65 6E 63 65 20 6E 6F 74 20 73 65 reference not se
0x000001A0 74 20 74 6F 20 61 6E 20 69 6E 73 74 61 6E 63 65 t to an instance
0x000001B0 20 6F 66 20 61 6E 20 6F 62 6A 65 63 74 2E 0D 0A of an object...
0x000001C0 30 3A 30 3A 31 34 3A 20 43 61 6E 27 74 20 72 65 0:0:14: Can't re
0x000001D0 61 64 20 69 73 73 75 65 3A 20 41 49 2D 32 34 37 ad issue: AI-247
I can get the file with the GetFTP processor, but how do I convert it to JSON so I can send it to Elastic?
I am new to NiFi; I hope I'm not missing something basic. Any help will be appreciated.
Thanks
You can use the ConvertRecord processor with a CSVReader for the input (configured to use : as the delimiter) and a JsonRecordSetWriter for the output.
NiFi can automatically infer the schema, but as it doesn't appear you have a header line for the incoming data, this will probably not be helpful. In that case, you can use the Schema Registry to hold two schemas -- one for the incoming log lines, indicating what each field should be called and the data type, and one for the JSON output. Bryan Bende has written a great article about this process.
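If you go the schema route, a sketch of what the incoming-line schema might look like is below; the field names are my own assumptions (with : as the delimiter, the first three fields are the timestamp parts and the rest is the message), and you would register it in an AvroSchemaRegistry and point the CSVReader at it:
{
  "type": "record",
  "name": "LogLine",
  "namespace": "example.logs",
  "fields": [
    { "name": "hours",   "type": "string" },
    { "name": "minutes", "type": "string" },
    { "name": "seconds", "type": "string" },
    { "name": "message", "type": "string" }
  ]
}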
I'm trying to run my shell script on Linux (Ubuntu).
It runs correctly on macOS, but it doesn't on Ubuntu.
#!/usr/bin/env bash
while true
do
node Client/request -t 10.9.2.4 -p 4400 --flood
done
Ubuntu outputs this error when running sh myScript.sh:
Syntax error: end of file unexpected (expecting "do")
Why is there any difference between them, since both of them are running by Bash? How can I avoid future errors caused by their differences?
I tried cat yourscript.sh | tr -d '\r' >> yournewscript.sh as a related question suggested, and also while [ true ].
The output of hexdump -C util/runner.sh is:
00000000 23 21 2f 75 73 72 2f 62 69 6e 2f 65 6e 76 20 62 |#!/usr/bin/env b|
00000010 61 73 68 0d 0a 0d 0a 77 68 69 6c 65 20 5b 20 74 |ash....while [ t|
00000020 72 75 65 20 5d 0d 0a 64 6f 0d 0a 20 20 20 6e 6f |rue ]..do.. no|
00000030 64 65 20 43 6c 69 65 6e 74 2f 72 65 71 75 65 73 |de Client/reques|
00000040 74 20 2d 74 20 31 39 32 2e 31 36 38 2e 30 2e 34 |t -t 192.168.0.4|
00000050 31 20 2d 70 20 34 34 30 30 20 2d 2d 66 6c 6f 6f |1 -p 4400 --floo|
00000060 64 0d 0a 64 6f 6e 65 0d 0a |d..done..|
00000069
The shebang (#!) line at the top of your file indicates that this is a bash script. But then you run your script with sh myScript.sh, so it is interpreted by the sh shell instead.
The sh shell is not the same as the bash shell in Ubuntu, as explained here.
To avoid this problem in the future, you should invoke shell scripts via their shebang line. Also prefer bash over sh, because the bash shell is more convenient and standardized (IMHO). For the script to be directly callable, you have to set the executable flag, like this:
chmod +x yournewscript.sh
This has to be done only once (it's not necessary to do this on every call.)
Then you can just call the script directly:
./yournewscript.sh
and it will be interpreted by whatever command is present in the first line of the script.
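Since your hexdump also shows CR characters (the 0d bytes before every 0a), you may additionally want to strip the Windows line endings before running the script. A minimal sketch, assuming GNU sed and the path from your hexdump:
sed -i 's/\r$//' util/runner.sh   # drop the carriage returns (0d bytes)
chmod +x util/runner.sh
./util/runner.sh                  # now interpreted by bash via the shebang line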
[Update]
I was able to bring up the jPOS client and server simulators on the same box using this link: http://jpos.org/blog/2013/07/setting-up-the-client-simulator/ (please note the setup is pretty similar to the one described in the link for running a server simulator too).
What I did next was basically try to look at the tcpdump (also using Wireshark). But what I see is not what I expected. Here's what I see (please note the data part):
Data (325 bytes)
0000 3c 69 73 6f 6d 73 67 3e 0a 20 20 3c 21 2d 2d 20 <isomsg>. <!--
0010 6f 72 67 2e 6a 70 6f 73 2e 69 73 6f 2e 70 61 63 org.jpos.iso.pac
0020 6b 61 67 65 72 2e 58 4d 4c 50 61 63 6b 61 67 65 kager.XMLPackage
0030 72 20 2d 2d 3e 0a 20 20 3c 66 69 65 6c 64 20 69 r -->. <field i
0040 64 3d 22 30 22 20 76 61 6c 75 65 3d 22 31 38 30 d="0" value="180
0050 30 22 2f 3e 0a 20 20 3c 66 69 65 6c 64 20 69 64 0"/>. <field id
0060 3d 22 37 22 20 76 61 6c 75 65 3d 22 30 37 32 30 ="7" value="0720
0070 30 30 33 36 33 39 22 2f 3e 0a 20 20 3c 66 69 65 003639"/>. <fie
0080 6c 64 20 69 64 3d 22 31 31 22 20 76 61 6c 75 65 ld id="11" value
0090 3d 22 37 39 39 38 31 33 22 2f 3e 0a 20 20 3c 66 ="799813"/>. <f
00a0 69 65 6c 64 20 69 64 3d 22 31 32 22 20 76 61 6c ield id="12" val
00b0 75 65 3d 22 37 39 39 38 30 35 22 2f 3e 0a 20 20 ue="799805"/>.
00c0 3c 66 69 65 6c 64 20 69 64 3d 22 36 33 22 20 76 <field id="63" v
00d0 61 6c 75 65 3d 22 4d 6f 6e 20 4a 75 6c 20 32 30 alue="Mon Jul 20
00e0 20 30 30 3a 33 36 3a 33 39 20 50 44 54 20 32 30 00:36:39 PDT 20
00f0 31 35 22 2f 3e 0a 20 20 3c 69 73 6f 6d 73 67 20 15"/>. <isomsg
0100 69 64 3d 22 31 32 30 22 3e 0a 20 20 20 20 3c 66 id="120">. <f
0110 69 65 6c 64 20 69 64 3d 22 30 22 20 76 61 6c 75 ield id="0" valu
0120 65 3d 22 32 39 31 31 30 30 30 31 22 2f 3e 0a 20 e="29110001"/>.
0130 20 3c 2f 69 73 6f 6d 73 67 3e 0a 3c 2f 69 73 6f </isomsg>.</iso
0140 6d 73 67 3e 0a msg>.
Data: 3c69736f6d73673e0a20203c212d2d206f72672e6a706f73...
[Length: 325]
If you look at the data, it looks like the XML ISO message. I was expecting something like the hex representation of ISO 8583, where the first bytes are the MTI, etc.
After looking at the client simulator file, I realized that it's an XML channel and packager. I looked at the following channel and packager docs: jpos.org/doc/javadoc/org/jpos/iso/packager/package-summary.html and jpos.org/doc/javadoc/org/jpos/iso/channel/package-summary.html
After changing the channel and packager to PostChannel and PostPackager, I still see problems on my client and it times out. I was wondering whether there is a way to see the actual raw data via tcpdump/Wireshark. The closest is Postilion, which has the data length prepended to the raw data.
After playing with the PostChannel and PostPackager, I was able to get it running and could see the message. What I needed to do was basically change both the server simulator and client simulator configurations to use the desired channel and packager.
This is what I changed in both the server and client simulators.
Server simulator: change the file src/dist/deploy/05_serversimulator.xml to use the desired channel and packager:
<channel class="org.jpos.iso.channel.PostChannel" logger="Q2"
packager="org.jpos.iso.packager.PostPackager">
Client simulator: change the file ./src/dist/deploy/10_clientsimulator_channel.xml to use the desired channel and packager:
<channel class="org.jpos.iso.channel.PostChannel" logger="Q2"
packager="org.jpos.iso.packager.PostPackager">
And then fire up the client and server simulators.
Channels assist you in connecting to the other entity and add headers, length headers, TPDU, etc., based on the implementation of the channel used.
The PostChannel you use here adds a 2-byte length header containing the size of the message. This helps the receiver collect the right number of bytes from the TCP stream.
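As a concrete illustration (assuming the header is a plain 2-byte big-endian binary length, which is worth confirming against the channel source), the 325-byte XML message shown above would be preceded on the wire by the bytes 01 45:
printf '%04x\n' 325    # prints 0145, i.e. the two header bytes 01 45, followed by the 325-byte payload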
Packagers assist you in packing fields into the message; examples are fixed fields, length-prepended variable fields, and the encoding these should have (hex, BCD, ASCII).
The client/server sims use XML out of the box to make the concepts easy to understand.
I am trying to connect to the reference websocket echo server "manually", in order to learn how the protocol works (I am using socat for that). However, the server invariably closes the connection without providing an answer. Any idea why?
Here is what I do:
socat - TCP:echo.websocket.org:80
Then, I paste the following text in the terminal:
GET /?encoding=text HTTP/1.1
Origin: http://www.websocket.org
Connection: Upgrade
Host: echo.websocket.org
Sec-WebSocket-Key: P7Kp2hTLNRPFMGLxPV47eQ==
Upgrade: websocket
Sec-WebSocket-Version: 13
I sniffed the parameters of the connection with the developer tools in Firefox on the same machine, where this works flawlessly; therefore, I assume they are correct. However, after that, the server closes the connection immediately without providing an answer. Why? How can I implement the protocol "manually"?
I would like to type test in my terminal and get the server to reply with what I typed (it works in a web browser).
I think you want to modify the socket stream to translate \n (line feed) to CRLF (carriage return + line feed). Running info socat produces detailed documentation, which includes this option:
crnl Converts the default line termination character NL ('\n', 0x0a)
to/from CRNL ("\r\n", 0x0d0a) when writing/reading on this chan-
nel (example). Note: socat simply strips all CR characters.
So I think you should be able to do this:
socat - TCP:echo.websocket.org:80,crnl
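If the handshake succeeds, you should get back an HTTP/1.1 101 response whose Sec-WebSocket-Accept header is derived from your key (per RFC 6455: base64 of the SHA-1 of the key concatenated with a fixed GUID). You can pre-compute the value to expect for the key above, for example with:
printf '%s' 'P7Kp2hTLNRPFMGLxPV47eQ==258EAFA5-E914-47DA-95CA-C5AB0DC85B11' | openssl dgst -sha1 -binary | base64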
I'd like to add that my WebSocket tool websocat can help in debugging the WebSocket protocol, especially when combined with socat:
$ websocat - ws-c:sh-c:"socat -v -x - tcp:echo.websocket.org:80" --ws-c-uri ws://echo.websocket.org
> 2018/07/03 16:30:06.021658 length=157 from=0 to=156
47 45 54 20 2f 20 48 54 54 50 2f 31 2e 31 0d 0a GET / HTTP/1.1..
48 6f 73 74 3a 20 65 63 68 6f 2e 77 65 62 73 6f Host: echo.webso
63 6b 65 74 2e 6f 72 67 0d 0a cket.org..
43 6f 6e 6e 65 63 74 69 6f 6e 3a 20 55 70 67 72 Connection: Upgr
61 64 65 0d 0a ade..
55 70 67 72 61 64 65 3a 20 77 65 62 73 6f 63 6b Upgrade: websock
65 74 0d 0a et..
53 65 63 2d 57 65 62 53 6f 63 6b 65 74 2d 56 65 Sec-WebSocket-Ve
72 73 69 6f 6e 3a 20 31 33 0d 0a rsion: 13..
53 65 63 2d 57 65 62 53 6f 63 6b 65 74 2d 4b 65 Sec-WebSocket-Ke
79 3a 20 59 76 36 32 44 31 57 6d 7a 79 79 31 65 y: Yv62D1Wmzyy1e
69 6d 62 47 6d 68 69 61 67 3d 3d 0d 0a imbGmhiag==..
0d 0a ..
--
< 2018/07/03 16:30:06.164057 length=201 from=0 to=200
48 54 54 50 2f 31 2e 31 20 31 30 31 20 57 65 62 HTTP/1.1 101 Web
20 53 6f 63 6b 65 74 20 50 72 6f 74 6f 63 6f 6c Socket Protocol
20 48 61 6e 64 73 68 61 6b 65 0d 0a Handshake..
43 6f 6e 6e 65 63 74 69 6f 6e 3a 20 55 70 67 72 Connection: Upgr
61 64 65 0d 0a ade..
44 61 74 65 3a 20 54 75 65 2c 20 30 33 20 4a 75 Date: Tue, 03 Ju
6c 20 32 30 31 38 20 31 33 3a 31 35 3a 30 30 20 l 2018 13:15:00
47 4d 54 0d 0a GMT..
53 65 63 2d 57 65 62 53 6f 63 6b 65 74 2d 41 63 Sec-WebSocket-Ac
63 65 70 74 3a 20 55 56 6a 32 74 35 50 43 7a 62 cept: UVj2t5PCzb
58 49 32 52 4e 51 75 70 2f 71 48 31 63 5a 44 6e XI2RNQup/qH1cZDn
38 3d 0d 0a 8=..
53 65 72 76 65 72 3a 20 4b 61 61 7a 69 6e 67 20 Server: Kaazing
47 61 74 65 77 61 79 0d 0a Gateway..
55 70 67 72 61 64 65 3a 20 77 65 62 73 6f 63 6b Upgrade: websock
65 74 0d 0a et..
0d 0a ..
--
ABCDEF
> 2018/07/03 16:30:12.707919 length=13 from=157 to=169
82 87 40 57 f5 88 01 15 b6 cc 05 11 ff ..#W.........
--
< 2018/07/03 16:30:12.848398 length=9 from=201 to=209
82 07 41 42 43 44 45 46 0a ..ABCDEF.
--
ABCDEF
> 2018/07/03 16:30:14.528333 length=6 from=170 to=175
88 80 18 ec 05 a8 ......
--
< 2018/07/03 16:30:14.671629 length=2 from=210 to=211
88 00 ..
--
In case of failures with manually driven socat -v -x - TCP:echo.websocket.org:80,crnl (mentioned in the other answer), you can compare it with the WebSocat-driven socat session depicted above.
Reverse (server) example with socat debug dump:
socat -v -x tcp-l:1234,fork,reuseaddr exec:'websocat -t ws-u\:stdio\: mirror\:'
Alternatively, here is a way to connect to and read from a secure WebSocket (wss) stream from the command line using only core PHP.
php -r '$sock=stream_socket_client("tls://echo.websocket.org:443",$e,$n,30,STREAM_CLIENT_CONNECT,stream_context_create(null));if(!$sock){echo"[$n]$e".PHP_EOL;}else{fwrite($sock,"GET / HTTP/1.1\r\nHost: echo.websocket.org\r\nAccept: */*\r\nConnection: Upgrade\r\nUpgrade: websocket\r\nSec-WebSocket-Version: 13\r\nSec-WebSocket-Key: ".rand(0,999)."\r\n\r\n");while(!feof($sock)){var_dump(fgets($sock,2048));}}'
Another similar example, pulling from a different wss server (do not get rekt):
php -r '$sock=stream_socket_client("tls://stream.binance.com:9443",$e,$n,30,STREAM_CLIENT_CONNECT,stream_context_create(null));if(!$sock){echo"[$n]$e".PHP_EOL;}else{fwrite($sock,"GET /stream?streams=btcusdt#kline_1m HTTP/1.1\r\nHost: stream.binance.com:9443\r\nAccept: */*\r\nConnection: Upgrade\r\nUpgrade: websocket\r\nSec-WebSocket-Version: 13\r\nSec-WebSocket-Key: ".rand(0,999)."\r\n\r\n");while(!feof($sock)){var_dump(explode(",",fgets($sock,512)));}}'
I hope you can give me an idea about what's going wrong.
The scenario:
I run gitweb (CGI) with a script in fastcgi mode:
#!/bin/sh
export FCGI_SOCKET_PATH=127.0.0.1:7001
su git -c "/var/www/vh_[vhost]/htdocs/gitweb.cgi --fastcgi &"
Then I use nginx to serve that content:
...
fastcgi_pass 127.0.0.1:7001;
...
Everything works as expected, but here's the problem:
$ wget "http://git.[host].de/?p=[repo].git;a=summary" -O /tmp/test.txt && file --mime-encoding /tmp/test.txt
> /tmp/test.txt: iso-8859-1
$ su git -c "./gitweb.cgi \"?p=[repo].git;a=summary\" > ./test" && file --mime-encoding ./test
> ./test: utf-8
Which obviously means that the FastCGI output is UTF-8 while the content served by nginx is ISO-8859-1.
Firebug's response headers:
Server nginx
Date Fri, 02 Sep 2011 14:14:08 GMT
Content-Type application/xhtml+xml; charset=utf-8
Transfer-Encoding chunked
Connection close
It looks like the transfer using the socket leads to an encoding problem.
I've tested a lot but can't figure out how to solve this.
Although you aren't using PHP, I found the fix for my issue by wrapping the pieces that were being output as ISO-8859-1 with utf8_encode(): http://php.net/manual/en/function.utf8-encode.php
If your CGI is in Perl, maybe http://perldoc.perl.org/utf8.html will solve your problem. It solved mine ... Zürich
Another option could be to add the following to the http { } statement in your nginx.conf:
charset utf-8;
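For clarity, a minimal sketch of where that directive goes (the rest of the http block is whatever you already have):
http {
    charset utf-8;
    # ... existing server/upstream configuration ...
}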
I was able to make it work by using fcgiwrap.
I thought some environment variables were different between the two methods, so I added the following code to the gitweb.cgi dispatch() sub:
open my $tmplogfile, ">", "/tmp/gitweb-env.txt";
foreach my $varkey (sort keys %ENV) {
print $tmplogfile "$varkey = $ENV{$varkey}\n";
}
close $tmplogfile;
but the environments were the same.
Something must be done differently by fcgiwrap; I have not yet found what.
Here are the commands I use and the differences I found using tcpdump on the fcgi socket:
# gitweb spawned by fcgiwrap outputs utf-8
/usr/bin/spawn-fcgi -d /usr/share/gitweb -a 127.0.0.1 -p 3000 -u www-data -g gitolite -P /run/gitweb/gitweb.cgi.pid -- /usr/sbin/fcgiwrap
# Require the following nginx gitweb_fastcgi_params
# fastcgi_param QUERY_STRING $query_string;
# fastcgi_param REQUEST_METHOD $request_method;
# fastcgi_param SCRIPT_NAME $fastcgi_script_name;
# fastcgi_param DOCUMENT_ROOT $document_root;
# With the following nginx configuration
# upstream gitweb {
# server 127.0.0.1:3000;
# }
#
# server {
# listen 80;
#
# server_name git.example.net;
#
# root /usr/share/gitweb;
#
# access_log /var/log/nginx/gitweb-access.log;
# error_log /var/log/nginx/gitweb-errors.log;
#
# location / {
# alias /usr/share/gitweb/gitweb.cgi;
# include gitweb_fastcgi_params;
# fastcgi_pass gitweb;
# }
#
# location /static {
# alias /usr/share/gitweb/static;
# expires 31d;
# }
# }
# STDOUT captured on lo
# Beginning of the FCGI answer
# 00000000 01 06 00 01 1f f8 00 00 53 74 61 74 75 73 3a 20 ........ Status:
# 00000010 32 30 30 20 4f 4b 0d 0a 43 6f 6e 74 65 6e 74 2d 200 OK.. Content-
# 00000020 54 79 70 65 3a 20 61 70 70 6c 69 63 61 74 69 6f Type: ap plicatio
# 00000030 6e 2f 78 68 74 6d 6c 2b 78 6d 6c 3b 20 63 68 61 n/xhtml+ xml; cha
# 00000040 72 73 65 74 3d 75 74 66 2d 38 0d 0a 0d 0a 3c 3f rset=utf -8....<?
# 00000050 78 6d 6c 20 76 65 72 73 69 6f 6e 3d 22 31 2e 30 xml vers ion="1.0
# [...]
#
# "Guido Günther" as UTF-8
# 00000FA0 6c 65 3d 22 53 65 61 72 63 68 20 66 6f 72 20 63 le="Sear ch for c
# 00000FB0 6f 6d 6d 69 74 73 20 61 75 74 68 6f 72 65 64 20 ommits a uthored
# 00000FC0 62 79 20 47 75 69 64 6f 20 47 c3 bc 6e 74 68 65 by Guido G..nthe
# 00000FD0 72 22 20 63 6c 61 73 73 3d 22 6c 69 73 74 22 20 r" class ="list"
Before, gitweb --fastcgi was directly spawned by spawn-fcgi:
# gitweb spawned by spawn-fcgi outputs iso-8859-1
/usr/bin/spawn-fcgi -d /usr/share/gitweb -a 127.0.0.1 -p 3000 -u www-data -g gitolite -P /run/gitweb/gitweb.cgi.pid -- /usr/share/gitweb/gitweb.cgi --fastcgi
# STDOUT captured on lo
# Beginning of the FCGI answer, with "00 46 02" in place of the "1f f8 00" seen in the utf-8 output
# 00000000 01 06 00 01 00 46 02 00 53 74 61 74 75 73 3a 20 .....F.. Status:
# 00000010 32 30 30 20 4f 4b 0d 0a 43 6f 6e 74 65 6e 74 2d 200 OK.. Content-
# 00000020 54 79 70 65 3a 20 61 70 70 6c 69 63 61 74 69 6f Type: ap plicatio
# 00000030 6e 2f 78 68 74 6d 6c 2b 78 6d 6c 3b 20 63 68 61 n/xhtml+ xml; cha
# 00000040 72 73 65 74 3d 75 74 66 2d 38 0d 0a 0d 0a 00 00 rset=utf -8......
# 00000050 01 06 00 01 02 88 00 00 3c 3f 78 6d 6c 20 76 65 ........ <?xml ve
# 00000060 72 73 69 6f 6e 3d 22 31 2e 30 22 20 65 6e 63 6f rsion="1 .0" enco
# 00000070 64 69 6e 67 3d 22 75 74 66 2d 38 22 3f 3e 0a 3c ding="ut f-8"?>.<
# [...]
#
# "Guido Günther" as ISO-8859-1
# 00001128 74 6c 65 3d 22 53 65 61 72 63 68 20 66 6f 72 20 tle="Sea rch for
# 00001138 63 6f 6d 6d 69 74 73 20 61 75 74 68 6f 72 65 64 commits authored
# 00001148 20 62 79 20 47 75 69 64 6f 20 47 fc 6e 74 68 65 by Guid o G.nthe