I'm trying to send text from a wordlist, line by line, to a Discord channel via a Discord webhook. For some reason the script below only sends the last line of the wordlist.
from discord_webhook import DiscordWebhook
with open('wordlist.txt','r') as lines:
    for line in lines:
        webhook = DiscordWebhook(url='webhookurlhere', content=lines.readline()
        response = webhook.execute()
from discord_webhook import DiscordWebhook

file = open("wordlist.txt", 'r')
lines = file.readlines()
for line in lines:
    webhook = DiscordWebhook(url='myurl', content=line)
    response = webhook.execute()
file.close()
It appears you were missing a closing parenthesis on your DiscordWebhook call, and passing lines.readline() as the content while also iterating over the file mixes two ways of reading it, so you don't get one message per line as you loop. I also structured the reading of the file a little differently, just easier to read for me. See if this works for you.
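If it helps, here is a minimal variant of the same idea using a with block, so the file is closed automatically; the URL is still a placeholder, and stripping the trailing newline off each line is my own addition:

from discord_webhook import DiscordWebhook

with open("wordlist.txt", "r") as f:
    for line in f:
        # iterating the file object yields one line at a time
        webhook = DiscordWebhook(url='webhookurlhere', content=line.strip())
        response = webhook.execute()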
I have a stream of data that I’m writing to a named pipe:
named_pipe = '/tmp/pipe' # Location of named pipe
File.mkfifo(named_pipe) # Create named pipe
File.open(named_pipe, 'w+') # Necessary to not get a broken pipe when ⌃C from another process later on
system('youtube-dl', '--newline', 'https://www.youtube.com/watch?v=aqz-KE-bpKQ', out: named_pipe) # Output download progress, one line at a time
Trouble is, while I can cat /tmp/pipe and get the information, I’m unable to read the file from another Ruby process. I’ve tried File.readlines, File.read with seeking, File.open then reading, and other stuff I no longer remember. Some of those hang, others error out.
How can I get the same result as with cat, in pure Ruby?
Note I don’t have to use system to send to the pipe (Open3 would be acceptable), but any solution requiring external dependencies is a no-go.
It looks like File.readlines/IO.readlines and File.read/IO.read need to read the whole file (here, the pipe) to the end before returning, so nothing gets printed while the writer is still producing data.
Try File#each/IO.foreach instead, which process a file line by line and don't require the whole thing to be loaded into memory:
File.foreach("/tmp/pipe") { |line| p line }
# or
File.open('/tmp/pipe','r').each { |line| p line }
I'm trying to set up a .py plugin that will save decoded Protobuf responses to a file, but whatever I do, the result is always a file in byte format (not decoded). I have also tried to do the same by using "w" in mitmproxy - although I saw decoded data on screen, in the file it was encoded again.
Any thoughts on how to do it correctly?
Sample code for now:
import mitmproxy

def response(flow):
    # if flow.request.pretty_url.endswith("some-url.com/endpoint"):
    if flow.request.pretty_url.endswith("some-url.com/endpoint"):
        f = open("test.log", "ab")
        with decoded(flow.response):
            f.write(flow.request.content)
            f.write(flow.response.content)
Eh, I'm not sure this helps, but what happens if you don't open the file in binary mode?
f = open("test.log", "a")
Hi,
here are some basic things that I found.
Try replacing
f.write(flow.request.content)
with
f.write(flow.request.text)
I read it on this website
https://discourse.mitmproxy.org/t/modifying-https-response-body-not-working/645/3
Please read and try this to get the requests and responses assembled.
MITM Proxy, getting entire request and response string
Best of luck with your project.
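For what it's worth, a small sketch that combines both suggestions above (a text-mode file plus the .text accessor); the URL and filename are the placeholders from the question, and for a raw Protobuf body .text may still not give you a readable dump:

def response(flow):
    if flow.request.pretty_url.endswith("some-url.com/endpoint"):
        # text mode instead of "ab", and .text instead of .content
        with open("test.log", "a") as f:
            f.write(flow.request.text)
            f.write(flow.response.text)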
I was able to find a way to do that. It seems mitmdump or mitmproxy wasn't able to save the raw decoded Protobuf on its own, so I used:
mitmdump -s decode_script.py
with the following script to save the decoded data to a file:
import mitmproxy
import subprocess
import time

def response(flow):
    if flow.request.pretty_url.endswith("HERE/IS/SOME/API/PATH"):
        protobuffedResponse = flow.response.content
        (out, err) = subprocess.Popen(['protoc', '--decode_raw'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate(protobuffedResponse)
        outStr = str(out, 'utf-8')
        outStr = outStr.replace('\\"', '"')
        timestr = time.strftime("%Y%m%d-%H%M%S")
        with open("decoded_messages/" + timestr + ".decode_raw.log", "w") as f:
            f.write(outStr)
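As a side note, on Python 3.5+ the same protoc call could probably be written a bit more compactly with subprocess.run; a sketch, using the same variable names as above:

result = subprocess.run(['protoc', '--decode_raw'], input=protobuffedResponse,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
outStr = result.stdout.decode('utf-8')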
I have been working on a project for a while now, and I just reached another big step! However, for some of the .txt files that my program creates, it gives me this message:
File was loaded in the wrong encoding: 'UTF-8'
Most of the .txt files are fine, but for others it shows this error at the top (I can still read them). Here is my code:
from socket import *
import codecs
import subprocess

ipa = '192.168.1.'  # These are the first 3 digits of the IP addresses that the program looks for.

def is_up(adr):
    s = socket(AF_INET, SOCK_STREAM)
    s.settimeout(0.01)
    if not s.connect_ex((adr, 135)):
        s.close()
        return 1
    else:
        s.close()

def main():
    for i in range(1, 256):
        adr = ipa + str(i)
        if is_up(adr):
            with codecs.open("" + getfqdn(adr) + ".txt", "w+", 'utf-8-sig') as f:
                subprocess.run('ipconfig | findstr /i "ipv4"', stdout=f, shell=True, check=True)
                subprocess.run('wmic/node:'+adr+' product get name, version, vendor', stdout=f, shell=True, check=True)

main()
# Most code provided by Ashish Jain
Unfortunately I don't think I'm allowed to say exactly which files are giving me trouble, because I might be distributing information that someone can use for malicious intent.
Since your script only writes to the files, there's no reason to open them in w+ mode, which also enables reading. Opening the files in w mode should be enough.
Furthermore, the commands your script runs are evidently not outputting utf-8-sig-encoded text, hence the error. In most cases, letting the file use the default encoding by not specifying one will suffice.
Lastly, you're missing a space between wmic and /node: in the second command you run.
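A minimal sketch of what those three changes might look like in main(), with everything else left as in the question:

def main():
    for i in range(1, 256):
        adr = ipa + str(i)
        if is_up(adr):
            # plain "w" mode with the default encoding instead of codecs.open(..., "w+", 'utf-8-sig')
            with open(getfqdn(adr) + ".txt", "w") as f:
                subprocess.run('ipconfig | findstr /i "ipv4"', stdout=f, shell=True, check=True)
                # note the space after "wmic"
                subprocess.run('wmic /node:' + adr + ' product get name, version, vendor', stdout=f, shell=True, check=True)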
More sysadmin (Chef) than Ruby guy, so this may be a five-minute fix.
I am working on a task where I write a Ruby script that pulls JSON data from multiple files, parses it, and writes the desired fields to a single .csv file. Basically pulling metadata about AWS accounts and putting it in an accountant-friendly format.
I got a lot of help from another Stack Overflow question on how to solve the problem for a single file: json.parse help.
My issue is that I am trying to pull the same data from multiple JSON files in an array. I can get it to loop through each file with the code below.
require 'csv'
require "json"

delim_file = CSV.open("delimited_test.csv", "w")
aws_account_list = %w(example example2)

aws_account_list.each do |account|
  json_file = File.read(account.to_s + "_aws.json")
  parsed_json = JSON.parse(json_file)
  delim_file = CSV.open("delimited_test.csv", "w")
  # This next line could be a problem if you ran this code multiple times
  delim_file << ["EbsOptimized", "PrivateDnsName", "KeyName", "AvailabilityZone", "OwnerId"]
  parsed_json['Reservations'].each do |inner_json|
    inner_json['Instances'].each do |instance_json|
      delim_file << [[instance_json['EbsOptimized'].to_s, instance_json['PrivateDnsName'], instance_json['KeyName'], instance_json['Placement']['AvailabilityZone'], inner_json['OwnerId']],[]]
    end
    delim_file.close
  end
end
However, whenever I do it, it overwrites the same single row in the .csv file every time. I have tried adding a \n string to the end of the array, and converting the array to a string with hashes and adding a \n, but all that does is add a line to the same row that gets overwritten.
How would I go about making it read each JSON file and append each file's metadata to a new row? This looks like a simple case of writing the right loop, but I can't figure it out.
You declared your file like this:
delim_file = CSV.open("delimited_test.csv", "w")
To fix your issue, all you have to do is change "w" to "a":
delim_file = CSV.open("delimited_test.csv", "a")
See the docs for IO.new for a description of the available file modes. In short, w creates an empty file at that filename, overwriting any existing one, and writes to that. a creates the file only if it doesn't exist, and appends otherwise. Because you currently have w, it overwrites the file each time you run the script. With a, it will append to what's already there.
You need to open the file in append mode; use
delim_file = CSV.open("delimited_test.csv", "a")
'a'   Write-only, starts at end of file if file exists, otherwise creates a new file for writing.
'a+'  Read-write, starts at end of file if file exists, otherwise creates a new file for reading and writing.
I want my output to appear in my PyQt text edit, not the Python shell, after clicking a push button. I am not familiar with subprocess or stdout stuff and not even sure if this will involve them. I need some help here. Here is part of my code:
self.textEdit = QtGui.QTextEdit(Dialog)
self.textEdit.setGeometry(QtCore.QRect(20, 200, 431, 241))
self.textEdit.setObjectName(_fromUtf8("textEdit"))

def readI2C(self):
    data = i2c.read_byte(0x50)
    return data
    self.textEdit.setText(data)
This code does not print anything. I tried it with print data, but that prints the result in the Python shell. Can anyone help?
Place the line self.textEdit.setText(data) before return data. Once you return a value from the method, the lines after the return will not execute.
Also, if you're going to use textEdit only for output (not for editing), set self.textEdit.setReadOnly(1).
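A minimal sketch of the reordered method; wrapping the value in str() is my own addition, since setText expects a string and read_byte presumably returns an int:

def readI2C(self):
    data = i2c.read_byte(0x50)
    # update the widget first; anything after return would never run
    self.textEdit.setText(str(data))
    return data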