How to export FTP data from several packets - ftp

I've been trying for hours to solve this, and Googling like a maniac as well. How do I export the FTP data from a bunch of packets? Like when you export HTTP data in Wireshark: in just a few clicks you can export all the packets into a single file and then just open the HTML page.
Let's say you downloaded a .zip file (through FTP) and you captured this with Wireshark. Now I want to export all those FTP-data packets containing the .zip file into a copy of the .zip file. How can I do that? I managed to get the hex dumps (I think that's what they are called) of the packets, and they look like this:
0000 00 50 56 ca 11 d8 00 50 18 03 39 80 08 00 45 00 .PV....P..9...E.
0010 04 34 06 34 40 00 2d 06 d3 6f c1 e7 ec 2a c0 a8 .4.4#.-..o...*..
etc...
Maybe I can convert that somehow? Or is there some other way?

You can use Bro to extract files from FTP traffic (and other protocols as well). Simply run it as follows:
bro -r trace.pcap 'FTP::extract_file_types = /.*/'
The pattern controls the MIME type of the files to extract. Change -r <trace> to -i <interface> when sniffing on a network interface. Bro creates its log files in the directory it is run from. In addition to the basic logs, you'll now find files named
ftp-item_<SERVER-IP>:<SERVER-DATA-PORT>-<CLIENT-IP>:<CLIENT-PORT>.dat
which contain the payload of the FTP data.
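If you only have the copied hex dump text rather than the capture file (Wireshark also ships a text2pcap tool that can often turn such dumps back into a capture), the offset-plus-hex lines can be converted back into raw bytes. Below is a rough Python sketch, assuming the "offset, hex bytes, ASCII" layout shown in the question and hypothetical file names; you would still have to strip the Ethernet/IP/TCP headers and reassemble the FTP data stream yourself:
import re

# Match lines like "0000  00 50 56 ca ...  .PV....P..9...E."
# and capture only the hex byte columns.
hex_line = re.compile(r'^[0-9a-fA-F]{4,}\s+((?:[0-9a-fA-F]{2}\s+){1,16})')

with open('dump.txt') as src, open('packets.bin', 'wb') as dst:
    for line in src:
        m = hex_line.match(line)
        if m:
            # bytes.fromhex ignores the spaces between the hex pairs
            dst.write(bytes.fromhex(m.group(1)))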

Related

Is there a way to validate by content that the file I am uploading is a .msg file?

I need to validate that the file I am uploading is a .msg file. I want to do that by content. Because it is a Microsoft file, the header will be the same as .doc and .xls (D0 CF 11 E0 A1 B1 1A E1). The only way to differentiate between the Microsoft formats is by subheader.
I have currently tried to validate against the subheader:
[512 (0x200) byte offset]
52 00 6F 00 6F 00 74 00
20 00 45 00 6E 00 74 00
72 00 79 00
It worked on sample files, but when I save an Outlook mail (.msg) and try to validate it, it does not have that subheader (the one above). I currently have Outlook 2010. Does someone know why it does not contain the subheader, or what alternative I should use?
An MSG file (just like the old DOC and XLS formats) is an OLE storage file. You can check whether the "__properties_version1.0" stream exists - take a look at an MSG file with a viewer like SSView.
The MSG file format is described in depth in Microsoft's open specification ([MS-OXMSG]).
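If you want to do that same check programmatically, here is a minimal sketch using the third-party Python olefile package (the stream name is the one mentioned above; the sample file name is hypothetical):
import olefile  # third-party: pip install olefile

def looks_like_msg(path):
    """Return True if the file is an OLE container holding the MSG property stream."""
    if not olefile.isOleFile(path):   # rejects files without the D0 CF 11 E0 ... magic
        return False
    ole = olefile.OleFileIO(path)
    try:
        return ole.exists('__properties_version1.0')
    finally:
        ole.close()

print(looks_like_msg('mail.msg'))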

How to decode the APDU communication between a terminal and a chip?

I have an APDU communication between a terminal and a chip, and I need to decode it.
It's something like this:
Terminal: 00 B6 02 00 06 00
Chip: 49 55 7B 2C 1F 30 57 35 63 7D 24 7B 60 21
Terminal: 00 B5 03 0B 04 02 00
Chip: 45 43 3C 3B 4A 31 51 35 53 4B 34 2C 30 21
From what I know, the terminal is sending commands to the smart card chip, and the chip is giving responses.
So, I need to know what their communication is about. It has to do with the EMV standards and APDUs.
How can I decode it? What are the steps and rules?
The communication between chip and terminal uses APDUs: command APDUs and response APDUs. The outline below will give you an idea of the structure of the messages. For detailed reading, download the documents (they are called "books" in the EMV world) from the EMVCo website; in fact, the text below is copied from Book 3. Have a detailed look and come back if you need more information.
All data are in hex.
The command APDU has the following format:
[Class] [Instruction] [Parameter 1] [Parameter 2] [Length of command data (Lc)] [Command data] [Length of maximum expected response data (Le)]
The response APDU has the format:
[Data] [2-byte status of the APDU execution (SW1 SW2; see the coding in Book 3)]
Coding of the Class Byte
The most significant nibble of the class byte indicates the type of command: '0' for an inter-industry command, '8' for a command proprietary to this specification.
Instruction bytes define the function you wish to perform; the coding of the instruction byte is also listed in Book 3.
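As a rough illustration only (not a full EMV parser, since the Lc/data/Le fields depend on the APDU case), here is a Python sketch that splits the four header bytes of the first terminal command shown in the question:
def parse_command_header(apdu_hex):
    """Split the Class, Instruction, P1 and P2 bytes of a command APDU."""
    apdu = bytes.fromhex(apdu_hex)
    cla, ins, p1, p2 = apdu[:4]
    return {
        'class': f'{cla:02X}',        # high nibble: 0 = inter-industry, 8 = proprietary
        'instruction': f'{ins:02X}',
        'p1': f'{p1:02X}',
        'p2': f'{p2:02X}',
        'rest': apdu[4:].hex().upper(),  # Lc / command data / Le, depending on the case
    }

print(parse_command_header('00 B6 02 00 06 00'))
# {'class': '00', 'instruction': 'B6', 'p1': '02', 'p2': '00', 'rest': '0600'}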

Read and Write File Section of File Only (Without Streams)

Most high-level representations of files are as streams. C's fopen, ActiveX's Scripting.FileSystemObject and ADODB.Stream - in fact, anything built on top of C is very likely to use a stream representation to edit files.
However, when modifying large (~4MiB) fixed-structure binary files, it seems wasteful to read the whole file into memory and write almost exactly the same thing back to disk - this will almost certainly have a performance penalty attached. Looking at the majority of uncompressed filesystems, there seems to me to be no reason why a section of the file couldn't be read from and written to without touching the surrounding data. At a maximum, the block would have to be rewritten, but that's usually on the order of 4KiB; much less than the entire file for large files.
Example:
00 01 02 03
04 05 06 07
08 09 0A 0B
0C 0D 0E 0F
might become:
00 01 02 03
04 F0 F1 F2
F3 F4 F5 0B
0C 0D 0E 0F
A solution using existing ActiveX objects would be ideal, but any way of doing this without having to re-write the entire file would be good.
Okay, here's how to do the exercise in PowerShell (e.g. save it as hello.ps1):
$path = "hello.txt"
# Open the file for read/write without truncating it
$bw = New-Object System.IO.BinaryWriter([System.IO.File]::Open($path, [System.IO.FileMode]::Open, [System.IO.FileAccess]::ReadWrite, [System.IO.FileShare]::ReadWrite))
# Jump to byte offset 5 and overwrite six bytes in place
# ([void] discards the Seek return value so it doesn't end up in the script output)
[void]$bw.BaseStream.Seek(5, [System.IO.SeekOrigin]::Begin)
$bw.Write([byte] 0xF0)
$bw.Write([byte] 0xF1)
$bw.Write([byte] 0xF2)
$bw.Write([byte] 0xF3)
$bw.Write([byte] 0xF4)
$bw.Write([byte] 0xF5)
$bw.Close()
You can test it from command line:
powershell -file hello.ps1
Then, you can invoke this from your HTA as:
var wsh = new ActiveXObject("WScript.Shell");
wsh.Run("powershell -file hello.ps1");
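For comparison outside the HTA/ActiveX world, the same in-place edit is just a seek-and-write on a file opened for read/write; here is a minimal Python sketch of the example from the question (the file name is hypothetical):
# Overwrite 6 bytes starting at offset 5; the rest of the file is left untouched.
with open('hello.txt', 'r+b') as f:   # 'r+b' opens for read/write without truncating
    f.seek(5)
    f.write(bytes([0xF0, 0xF1, 0xF2, 0xF3, 0xF4, 0xF5]))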

Firefox breaks pdf after download

Our web application provides the ability to download PDFs.
When the user clicks the download link we open the PDF in a new tab.
My Firefox uses pdf.js as its PDF viewer and I can save the PDF through its interface.
Everything was fine in Firefox 19, but version 24 downloads a file that appears corrupted (it displays the file, but can't download it correctly).
I noticed that the resulting file size is the nearest power of 2: for example, if my original PDF is 97 kB, then after downloading it through Firefox's pdf.js its size becomes 128 kB, and my desktop PDF viewers (like Acrobat) can't open it.
I tested this on the same version of our app.
Update
Demo PDF file: downloading works fine through the Linux Google Chrome viewer and through Linux Firefox 21 (pdf.js), but the same problem occurs with Linux Firefox 23.0.1.
Is something wrong with pdf.js or with our server?
Update #2
I looked at the binary contents of a broken and a non-broken file:
$ git diff not-broken.dump broken.dump
diff --git a/not-broken.dump b/broken.dump
index 3621089..5de337c 100644
--- a/not-broken.dump
+++ b/broken.dump
@@ -336,5 +336,7 @@
000014f0 b8 d0 3d 76 85 f8 76 9d e6 50 74 df e7 a7 bd b0 |..=v..v..Pt.....|
00001500 00 f1 6e 05 63 0a 65 6e 64 73 74 72 65 61 6d 0a |..n.c.endstream.|
00001510 65 6e 64 6f 62 6a 0a 73 74 61 72 74 78 72 65 66 |endobj.startxref|
-00001520 0a 35 32 31 33 0a 25 25 45 4f 46 0a |.5213.%%EOF.|
-0000152c
+00001520 0a 35 32 31 33 0a 25 25 45 4f 46 0a 00 00 00 00 |.5213.%%EOF.....|
+00001530 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
+*
+00010000
What we have here is a genuine bug. I filed: https://github.com/mozilla/pdf.js/issues/3634
Since the data transfer does not specify a Content-Length but uses chunked transfer encoding, pdf.js will use an initial buffer of 64 KiB that is doubled each time it would overflow. However, once the transfer is complete pdf.js will not shrink that buffer to the actual size, nor will it remember the actual size, so upon download the whole over-sized buffer (still the initial 64 KiB in your example) will be written out.
I don't think there is a real work-around, short of not using pdf.js at all (which is a user choice).
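If you just need to salvage a file that was already saved in this broken form, here is a rough Python sketch that drops the padding after the final %%EOF marker (assuming the padding is all trailing NUL bytes, as in the dump above; file names are hypothetical):
def trim_pdfjs_padding(src_path, dst_path):
    """Keep everything up to and including the last %%EOF line; drop the zero padding."""
    data = open(src_path, 'rb').read()
    end = data.rfind(b'%%EOF')
    if end == -1:
        raise ValueError('no %%EOF marker found')
    end += len(b'%%EOF')
    if data[end:end + 1] == b'\n':   # keep the newline that normally follows %%EOF
        end += 1
    open(dst_path, 'wb').write(data[:end])

trim_pdfjs_padding('broken.pdf', 'fixed.pdf')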

UTF-16 perl input output

I am writing a script that takes a UTF-16 encoded text file as input and outputs a UTF-16 encoded text file.
use open "encoding(UTF-16)";
open INPUT, "< input.txt"
or die "cannot open > input.txt: $!\n";
open(OUTPUT,"> output.txt");
while (<INPUT>) {
    print OUTPUT "$_\n";
}
Let's just say that my program writes everything from input.txt into output.txt.
This works perfectly fine in my Cygwin environment, which is using "This is perl 5, version 14, subversion 2 (v5.14.2) built for cygwin-thread-multi-64int".
But in my Windows environment, which is using "This is perl 5, version 12, subversion 3 (v5.12.3) built for MSWin32-x64-multi-thread",
every line in output.txt is prepended with garbage characters except the first line.
For example:
<FIRST LINE OF TEXT>
਀    ㈀  ㄀Ⰰ ㈀Ⰰ 嘀愀 ㌀ 䌀栀椀愀 䐀⸀⸀⸀  儀甀愀渀最 䠀ഊ<SECOND LINE OF TEXT>
...
Can anyone give some insight into why it works on Cygwin but not on Windows?
EDIT: After printing the encoding layers as suggested:
In Windows environment:
unix
crlf
encoding(UTF-16)
utf8
unix
crlf
encoding(UTF-16)
utf8
In Cygwin environment:
unix
perlio
encoding(UTF-16)
utf8
unix
perlio
encoding(UTF-16)
utf8
The only difference is between the perlio and crlf layer.
[ I was going to wait and give a thorough answer, but it's probably better if I give you a quick answer than nothing. ]
The problem is that crlf and the encoding layers are in the wrong order. Not your fault.
For example, say you do print "a\nb\nc\n"; using UTF-16le (since it's simpler and it's probably what you actually want). You'd end up with
61 00 0D 0A 00 62 00 0D 0A 00 63 00 0D 0A 00
instead of
61 00 0D 00 0A 00 62 00 0D 00 0A 00 63 00 0D 00 0A 00
I don't think you can get the right results with the open pragma or with binmode, but it can be done using open.
open(my $fh, '<:raw:encoding(UTF-16):crlf', $qfn)
You'll need to append a :utf8 with some older version, IIRC.
It works on cygwin because the crlf layer is only added on Windows. There you'd get
61 00 0A 00 62 00 0A 00 63 00 0A 00
You have a typo in your encoding: it should be use open ":encoding(UTF-16)"; note the colon. I don't know why it would work on Cygwin but not Windows; it could also be a 5.12 vs 5.14 thing. Perl seems to make up for the missing colon, but it could be what's causing your problem.
If that doesn't do it, check if the encoding is being applied to your filehandles.
print map { "$_\n" } PerlIO::get_layers(*INPUT);
print map { "$_\n" } PerlIO::get_layers(*OUTPUT);
Use lexical filehandles (i.e. open my $fh, "<", $file). Glob filehandles are global, so something else in your program might be interfering with them.
If all that checks out, if lexical filehandles are getting the encoding(UTF-16) applied, let us know and we can try something else.
UPDATE: This may provide your answer: "BOMed UTF files are not suitable for streaming models, and they must be slurped as binary files instead." Looks like you have to read the file in as binary and do the encoding as a string. This may have been a bug fixed in 5.14.
UPDATE 2: Yep, I can confirm this is a bug that was fixed in 5.14.
