I've written some custom shellcode that I want to encode using Metasploit's msfvenom. Back when msfencode was still available, this is how the command would have gone:
$ echo -ne "\x31...\x80" | sudo msfencode -a x86 -t c -e x86/jmp_call_additive
"pipe the shellcode to msfencode for architecture x86 with the output as a c array with the x86/jmp_call_additive encoder"
Now I want to do the same thing except with msfvenom, so I tried:
$ echo -ne "\x31...\x80" | sudo msfvenom -e x86/jmp_call_additive -a x86 -t c
But I get the following error message:
Attempting to read payload from STDIN...
You must select a platform for a custom payload
I thought that the -a flag was specifying the correct platform/architecture. I've also tried --platform in place of -a, but I still get the same error message.
I'm running this on a virtual machine using 32-bit Ubuntu. Thanks for any help.
$ echo -ne "\x31...\x80" | sudo msfvenom -e x86/jmp_call_additive -a x86 -p - --platform linux -f c
"pipe the custom shellcode into msfvenom with the x86/jmp_call_additive encoder on x86 architecture with a custom payload on a linux platform with a c array output format"
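If the shellcode gets longer, it may be more convenient to keep the raw bytes in a file and redirect that into msfvenom instead of echoing them. A sketch (shellcode.bin is just a hypothetical file name):
$ echo -ne "\x31...\x80" > shellcode.bin
$ sudo msfvenom -p - -a x86 --platform linux -e x86/jmp_call_additive -f c < shellcode.bin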
Whenever I run this code I always get the error "bash: screen: command not found"
The code is supposed to run spigot.jar, while also doing a few other things (it's not my code).
I really don't know what to try, but I do think I might have to be using Linux (which would be a pain in the ass considering I only have one computer):
screen -S powercraft/PRISON -p 0 -X stuff "`printf "stop\r"`" ;
screen -S root/PRISON -p 0 -X stuff "`printf "stop\r"`" ;
sleep 10 ;
pkill -f PRISON ;
cp -r /home/ALL/update/plugins-1.8/* plugins ;
cp -r auto/* plugins ;
sleep 1 ;
rm -rf auto/* ;
screen -d -m -S PRISON java -server -Xmx6G -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:MaxGCPauseMillis=30 -XX:+UseBiasedLocking -XX:+OptimizeStringConcat -XX:+UseFastAccessorMethods -XX:+AggressiveOpts -jar spigot.jar
Looking at your code, I can see the command pkill. pkill is a Linux command that is not available on Windows. You don't have to install Linux on your PC: you can use a virtual machine, or you can install the Windows Subsystem for Linux on Windows 10. Instructions on the latter can be found here.
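Once you are inside a Linux environment (a VM or WSL), the "screen: command not found" part is usually just a missing package. A minimal sketch, assuming a Debian/Ubuntu-based system:
# install screen only if it is not already present (package name assumed to be "screen")
command -v screen >/dev/null 2>&1 || sudo apt-get install -y screen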
A batch file on Windows Server 2008 R2 Enterprise SP1, calling plink.exe (v0.70.0.0) to execute commands on Ubuntu 16.04.4, fails intermittently with "Unable to read from standard input: The handle is invalid."
echo -e "youGuessedIt\n" | sudo -S nginx -t -c /home/userNoOne/Documents/nginx.conf &> /home/userNoOne/Documents/nginxResult.txt
echo -e "youGuessedIt\n" | sudo -S tail /var/log/nginx/error.log.1 >> /home/userNoOne/Documents/nginxResult.txt
echo -e "youGuessedIt\n" | sudo -S tail /var/log/nginx/error.log >> /home/userNoOne/Documents/nginxResult.txt
There is a known bug report for Plink.exe that documents this failure. I'm looking for the simplest workaround:
https://www.chiark.greenend.org.uk/~sgtatham/putty/wishlist/win-plink-stdin-handle-invalid.html
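One workaround commonly suggested (my assumption; it is not necessarily the one the bug report recommends) is to give plink a valid standard input explicitly, so it never touches the broken console handle: redirect stdin from NUL in the batch file and pass the remote commands via -m rather than on stdin. A rough sketch with hypothetical host and file names:
REM commands.txt would hold the echo/sudo lines shown above
plink.exe -batch -m commands.txt userNoOne@ubuntu-host < NUL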
I used to consume messages with amqp-consume using the command below on Debian 7, but since I installed Debian 8 I think the amqp-tools package has changed and it no longer recognizes my command.
I also noticed some changes: the web interface port changed from 55672 to 15672.
amqp-consume -d -q queue.udrive.admin.uiscsi -s 10.0.1.251 -p 5672 -e "directExchangeUdrive" --vhost "/" -r "" --username=guest --password=guest /bin/bash remoteManageUiSCSI.sh
error: both --server and --url options specify server host
I think this is the usage the command expects:
amqp-consume
consuming command not specified
Usage: amqp-consume [-dxA?] [-u|--url=amqp://...] [-s|--server=hostname] [--port=port] [--vhost=vhost] [--username=username] [--password=password] [--ssl] [--cacert=cacert.pem] [--key=key.pem] [--cert=cert.pem] [-q|--queue=queue] [-e|--exchange=exchange] [-r|--routing-key=routing key] [-d|--declare] [-x|--exclusive] [-A|--no-ack] [-c|--count=limit] [-p|--prefetch-count=limit] [-?|--help] [--usage] [OPTIONS]... <command> <args>
I tried all kinds of things with amqp:// and it didn't work.
I got the answer on another site, https://qpid.apache.org/releases/qpid-0.30/programming/book/QpidJNDI.html, but I still wonder why this answer is not in "man amqp-consume" or on the RabbitMQ web site...
The command that works for me is:
amqp-consume -d -u amqp://test:test@ustorageprod/%2f -q queue.udrive.admin.uiscsi -e "directExchangeUdrive" -r "" /bin/bash remoteManageUiSCSI.sh
amqp-publish -u amqp://test:test@ustorageprod/%2f -r "queue.udrive.ustorage" -e "directExchangeUdrive" -b "$msg"
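For reference, the URI breaks down as amqp://<user>:<password>@<host>[:port]/<vhost>, and the default vhost "/" has to be percent-encoded as %2f. So the old Debian 7 invocation with its separate -s/-p/--username/--password options should translate to something like this (a sketch, untested):
amqp-consume -d -u "amqp://guest:guest@10.0.1.251:5672/%2f" -q queue.udrive.admin.uiscsi -e "directExchangeUdrive" -r "" /bin/bash remoteManageUiSCSI.sh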
On my Fedora machine I sometimes need to find out certain components of the kernel name, e.g.
VERSION=3.18.9-200.fc21
VERSION_ARCH=3.18.9-200.fc21.x86_64
SHORT_VERSION=3.18
DIST_VERSION=fc21
EXTRAVERSION = -200.fc21.x86_64
I know about uname -a/-r/-m, but these don't give me all the components I need.
Of course I can just disassemble uname -r, e.g.
KERNEL_VERSION_ARCH=$(uname -r)
KERNEL_VERSION=$(uname -r | cut -d '.' -f 1-4)
KERNEL_SHORT_VERSION=$(uname -r | cut -d '.' -f 1-2)
KERNEL_DIST_VERSION=$(uname -r | cut -d '.' -f 4)
EXTRAVERSION="-$(uname -r | cut -d '-' -f 2)"
But this seems very cumbersome and not future-safe to me.
Question: is there an elegant way (i.e. more readable and distribution aware) to get all kernel version/name components I need?
Nice would be something like:
kernel-ver -f "%M.%m.%p-%e.%a"
3.19.4-200.fc21.x86_64
kernel-ver -f "%M.%m"
3.19
kernel-ver -f "%d"
fc21
Of course the uname -r part would need a bit of sed/awk/grep magic (see the sketch at the end of this answer). But there are some other options you can try:
cat /etc/os-release
cat /etc/lsb-release
Since it's Fedora you can try: cat /etc/fedora-release
lsb_release -a is also worth a try.
cat /proc/version, but that's nearly the same output as uname -a
In the files /etc/*-release the format is already VARIABLE=value, so you could source the file directly and access the variables later:
$ source /etc/os-release
$ echo $ID
fedora
To sum this up, here is a command that combines the above ideas and should work on every system:
cat /etc/*_ver* /etc/*-rel* 2>/dev/null
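And for the uname -r disassembly mentioned at the top of this answer, here is a sketch of my own (assuming the usual Fedora naming scheme <version>-<release>.<dist>.<arch>) that uses bash parameter expansion instead of several cut calls; it is more readable, but just as tied to the naming convention:
r=$(uname -r)                             # e.g. 3.18.9-200.fc21.x86_64
KERNEL_VERSION_ARCH=$r                    # 3.18.9-200.fc21.x86_64
KERNEL_VERSION=${r%.*}                    # strip the arch            -> 3.18.9-200.fc21
KERNEL_SHORT_VERSION=${r%.*-*}            # strip patch level onwards -> 3.18
KERNEL_DIST_VERSION=${KERNEL_VERSION##*.} # last field of VERSION     -> fc21
EXTRAVERSION="-${r#*-}"                   # everything after the dash -> -200.fc21.x86_64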
I have a bash script like this:
GITUSER="mygituser"
DBUSER="mysitedbuser"
DB="mysitedb"
SITE="mysite.com"
REPO="/var/git/myproject.git" # on the server
dropdb -U $DBUSER $DB &&
echo "remote db dump (gzip)" &&
F=`ssh $GITUSER@$SITE $REPO/dumpdb-gzip.sh` &&
echo "copying remote dump to localhost" &&
scp $GITUSER@$SITE:"$F" . &&
echo "deleting remote file" &&
ssh $GITUSER@$SITE rm "$F" &&
echo "loading dump in local db" &&
createdb -U $DBUSER -E UTF8 -O $DBUSER $DB &&
psql -U postgres -c "ALTER SCHEMA public OWNER TO $DBUSER" $DB &&
F=`echo "$F" | sed 's/^\/tmp\///'` &&
zcat "$F" | psql -q -f - -U $DBUSER $DB >/dev/null &&
rm "$F"
But running it on Mac OS X (Lion) gives me this error:
$ ./fetch_server_db.sh
remote db dump (gzip)
copying remote dump to localhost
pg_dump_2011-10-25_09-20-50.db.gz 100% 1017KB 254.2KB/s 00:04
deleting remote file
loading dump in local db
ALTER SCHEMA
./fetch_server_db.sh: line 24: 25878 Broken pipe: 13 zcat "$F"
25879 Segmentation fault: 11 | psql -q -f - -U $DBUSER $DB > /dev/null
I do not have this error on Snow Leopard, and the script continues to work perfectly fine on my Arch Linux machine. It fails with a segmentation fault only after I upgraded to Lion.
Any idea what could be the problem? If no immediate answer is obvious, pointing me in the right direction to debug this script or to locate the source of the problem on Mac OS X Lion will do just fine! :-)
UPDATE
I have further isolated this problem; it looks like PostgreSQL 9.0.5 is to blame. Specifically, when the line:
zcat "$F" | psql -q -f - -U $DBUSER $DB >/dev/null
is being executed (I ran the commands manually one by one in terminal), I get a "Segmentation fault: 11" error from postgresql, like this:
zcat "$F" | psql -q -f - -U mysitedbuser mysitedb >/dev/null
psql:-:32: ERROR: relation "acl_dummy" already exists
psql:-:46: ERROR: relation "acl_dummy_id_seq" already exists
Segmentation fault: 11
And this is the psql version I am using on my Lion:
$ psql --version
psql (PostgreSQL) 9.0.5
contains support for command-line editing
$ which psql
/opt/local/lib/postgresql90/bin/psql
$ psql -U postgres
psql (9.0.5)
Type "help" for help.
postgres=#
Any suggestions what else I can do?
The problem occurs with Xcode 4.2-compiled postgresql packages (it breaks both postgresql90 and postgresql91); see https://trac.macports.org/ticket/30090.
The solution is to write your own Portfile in ~/ports/databases/postgresql90/Portfile, adding around line 9:
revision 1
and around line 40:
if {${configure.compiler} == "clang"} {
    configure.compiler llvm-gcc-4.2
}
Then, copy the entire "files" subdirectory from /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports/databases/postgresql90/files into ~/ports/databases/postgresql90/ as well.
Make sure that in /opt/local/etc/macports/sources.conf you add
file:///Users/whateveryourusernameis/ports
before the url pointing to
rsync://rsync.macports.org/release/tarballs/ports.tar [default]
Then do a portindex in ~/ports or do a sudo port -v selfupdate.
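That is, roughly (assuming the local ports tree lives in ~/ports as above):
cd ~/ports
portindex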
And finally uninstall the previously clang-compiled postgresql90 package and clean it:
sudo port -v uninstall postgresql90 postgresql90-server
sudo port clean postgresql90 postgresql90-server
and then reinstall with:
sudo port -v install postgresql90 postgresql90-server
During this reinstallation step, you should notice in the stdout that your postgresql90 packages are now being compiled by llvm-gcc-4.2.
As a general note for compilation of packages via MacPorts, we can choose which compiler to use for a specific package (port) by using the recommendations here - https://trac.macports.org/wiki/PortfileRecipes#compiler
Actually, postgresql90 @9.0.6 already includes this amendment.
You can upgrade just by executing:
sudo port -v uninstall postgresql90
sudo port -v install postgresql90
The same applies to postgresql91.