I currently have existing bash code that greps keywords from a config file:
[USER1]
usrcid = 5654654654
usrsid = XDFDFSAS22
usrmid = COMPANYNAME1
usrsrt = secret1
urlenc = http://www.url1.com
[USER2]
usrcid = 5654654667
usrsid = XDFDFSAS45
usrmid = COMPANYNAME2
usrsrt = secret2
urlenc = http://www.url2.com
I store each value in a variable and use it in the rest of the script. What I want to achieve is to convert this behavior from bash to PHP and make a curl request:
F1=/etc/config/file.txt
CID=`grep "\[USER1\]" -A 5 $F1 | grep usrcid | awk '{print $3}'`
SID=`grep "\[USER1\]" -A 5 $F1 | grep usrsid | awk '{print $3}'`
MID=`grep "\[USER1\]" -A 5 $F1 | grep usrmid | awk '{print $3}'`
SRT=`grep "\[USER1\]" -A 5 $F1 | grep usrsrt | awk '{print $3}'`
URI=`grep "\[USER1\]" -A 5 $F1 | grep urlenc | awk '{print $3}'`
echo $CID $SID $MID $SRT $URI
I'm really not a PHP guru, so please excuse the code below, but from a general perspective it is my understanding of what I want to achieve:
<?php
include "/etc/config/file.txt";
// *** the equivalent code grep? ***
function get_data($url)
{
$ch = curl_init();
$timeout = 5;
curl_setopt($ch,CURLOPT_URL,$url);
curl_setopt($ch,CURLOPT_RETURNTRANSFER,1);
curl_setopt($ch,CURLOPT_CONNECTTIMEOUT,$timeout);
$data = curl_exec($ch);
curl_close($ch);
return $data;
}
// *** i'm not sure if this one is correct? ***
$returned_content = get_data('$URI/cid=$CID&sid=$SID&mid=$MID&srt=$SRT');
echo $returned_content;
?>
This is my first time asking on Stack Overflow, so I would like to thank you in advance!
include doesn't do what you think it does. It won't pick up the values you set in the text file. If the included file contained PHP code, it would be evaluated, but in this case it's only text. See the manual.
What you need is the parse_ini_file() function. It takes the path to the config file as its first argument and a boolean flag as its second. The second argument tells the function to process the sections in your config file, which you are using.
Example:
file.txt:
[USER1]
usrcid = 5654654654
usrsid = XDFDFSAS22
usrmid = COMPANYNAME1
usrsrt = secret1
urlenc = http://www.url1.com
[USER2]
usrcid = 5654654667
usrsid = XDFDFSAS45
usrmid = COMPANYNAME2
usrsrt = secret2
urlenc = http://www.url2.com
test.php:
<?php
$config = parse_ini_file("file.txt", true);
print_r($config);
?>
(See the manual for parse_ini_file())
This will load the config file to the $config variable, and it will contain the following:
Array
(
[USER1] => Array
(
[usrcid] => 5654654654
[usrsid] => XDFDFSAS22
[usrmid] => COMPANYNAME1
[usrsrt] => secret1
[urlenc] => http://www.url1.com
)
[USER2] => Array
(
[usrcid] => 5654654667
[usrsid] => XDFDFSAS45
[usrmid] => COMPANYNAME2
[usrsrt] => secret2
[urlenc] => http://www.url2.com
)
)
Now, to construct a URL you could use:
$url = "{$config['USER1']['urlenc']}/cid={$config['USER1']['usrcid']}&sid={$config['USER1']['usrsid']}&mid={$config['USER1']['usrmid']}&srt={$config['USER1']['usrsrt']}";
Or iterate through the array in $config dynamically to account for several sections. You can then run this URL through the cURL function you already have, as in the sketch below.
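Putting it together, a minimal sketch could look like this. It reuses the get_data() cURL helper from the question and uses http_build_query() to assemble the parameters; the question's URL format had no "?", so adjust the separator to whatever the endpoint actually expects:
<?php
// Load the config file into a nested array keyed by section name.
$config = parse_ini_file("/etc/config/file.txt", true);
$user = $config['USER1'];

// Assemble the query parameters from the USER1 section.
$query = http_build_query(array(
    'cid' => $user['usrcid'],
    'sid' => $user['usrsid'],
    'mid' => $user['usrmid'],
    'srt' => $user['usrsrt'],
));

// Pass the assembled URL to the get_data() cURL helper from the question.
// Note: the original URL had no "?"; change the separator if the endpoint expects a different format.
$returned_content = get_data($user['urlenc'] . '/?' . $query);
echo $returned_content;
?>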
Related
How can I read a file starting from a certain line, for example from the first line prefixed with "AT", in PHP?
Can you help me continue my script?
$data = $request->file('file');
$filetmp = $data->getRealPath();
$readfile = file_get_contents($filetmp);
$files = fopen($filetmp, "r");
$filedata = fread($files, filesize($filetmp));
fclose($files);
dd($filedata);

$file = $request->file('file');
$content = File::get($file->getRealPath());
$lines = explode("\n", $content);
$lines = array_slice($lines,
    array_keys(
        array_filter($lines,
            function ($item) {
                return strpos($item, 'AT');
            }))[0]
);
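A minimal sketch of that slicing step in plain PHP (note that strpos() returns 0, which is falsy, when 'AT' is at the very start of a line, so an explicit === 0 comparison is needed; the file path is only a placeholder):
<?php
// Read the file and split it into lines.
$lines = explode("\n", file_get_contents('/path/to/file.txt'));

// Find the index of the first line that starts with 'AT'.
$start = null;
foreach ($lines as $i => $line) {
    if (strpos($line, 'AT') === 0) {
        $start = $i;
        break;
    }
}

// Keep everything from that line onwards.
$fromAt = ($start !== null) ? array_slice($lines, $start) : array();
print_r($fromAt);
?>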
I have a command that outputs a collection of strings that look like this:
json.formats[0]].url = "https://example.com/ar.html"
json.formats[1].url = "https://example.com/es.html"
json.formats[2s].url = "https://example.com/ru.html"
json.formats[3].url = "https://example.com/pt.html"
json.formats[73].url = "https://example.com/ko.html"
json.formats[1502].url = "https://example.com/pl.html"
(there are many more instances; for simplicity's sake they've been removed)
I can use the command below:
myCmd | grep -e 'json\.formats\[.*\]\.url\ \=\ '
However, I only want the wildcard to match integers and to throw out non-integer matches. The command above gives me the following:
json.formats[0]].url = "https://example.com/ar.html"
json.formats[1].url = "https://example.com/es.html"
json.formats[2s].url = "https://example.com/ru.html"
json.formats[3].url = "https://example.com/pt.html"
json.formats[73].url = "https://example.com/ko.html"
json.formats[1502].url = "https://example.com/pl.html"
What I really want is this:
json.formats[1].url = "https://example.com/es.html"
json.formats[3].url = "https://example.com/pt.html"
json.formats[73].url = "https://example.com/ko.html"
json.formats[1502].url = "https://example.com/pl.html"
Thanks :-)
You may use:
myCmd | grep -E 'json\.formats\[[[:digit:]]+\]\.url = '
or:
myCmd | grep -E 'json\.formats\[[0-9]+\]\.url = '
[[:digit:]] is equivalent to [0-9] in most locales.
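If your grep was built with PCRE support, GNU grep's -P option lets you write \d+ as well (not available in every grep build):
myCmd | grep -P 'json\.formats\[\d+\]\.url = '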
I am writing a network monitoring script in bash. The base command I am using is ettercap -T -M ARP -i en1 // //. Then I pipe its output into egrep --color 'Host:|GET'.
A sample output I am getting looks like this:
GET /images/srpr/logo11w.png HTTP/1.1.
Host: www.google.com.
GET /en-us/us/products HTTP/1.1.
Host: www.caselogic.com.
My desired output is this:
Title: logo11w.png
URL: www.google.com/images/srpr/logo11w.png HTTP/1.1.
Title: Products - Case Logic
URL: www.caselogic.com/en-us/us/products
Things to notice: HTTP/1.1. and the trailing . at the end of the host are gone. The pieces are also formed into one URL, and there is a blank line after each Title/URL listing. I attempted forming them into one URL by parsing the command's output into a variable with
var=`sudo ettercap -T -M ARP -i en1 // // | egrep --color 'Host:|GET'` | echo $var
but obviously that doesn't work, because the input to the variable is a command that isn't done until the user requests a stop (CTRL+C).
To get the title of an HTML page, I use the command wget -qO- 'https://url.goes/here' | perl -l -0777 -ne 'print $1 if /<title.*?>\s*(.*?)\s*<\/title/si'. If it is something that doesn't have a title, such as an image, no title is fine.
Any help is greatly appreciated. Sorry if what I wrote is hard to read; feel free to ask questions.
Try this:
title_host.pl
#!/usr/bin/env perl
use warnings;
use strict;
use WWW::Mechanize;
# autocheck => 0 so a failed request doesn't die; we check success ourselves below
my $mech = WWW::Mechanize->new( autocheck => 0 );
my ($get, $host, $title);
while (<>) {
    if (m|^GET (\S+) |) {
        $get = $1;
    } elsif (m|^Host: (\S+)\.|) {
        $host = $1;
    } else {
        # Unrecognized line...reset
        $get = $host = $title = '';
    }
    if ($get and $host) {
        my ($title) = $get =~ m|^.*\/(.+?)$|;   # default title: last path segment
        my $url = 'http://' . $host . $get;
        $mech->get($url);
        if ($mech->success) {
            # HTML may have a title, images will not
            $title = $mech->title() || $title;
        }
        print "Title: $title\n";
        print "URL: $url\n";
        print "\n";
        $get = $host = $title = '';
    }
}
input
GET /images/srpr/logo11w.png HTTP/1.1.
Host: www.google.com.
GET /en-us/us/products HTTP/1.1.
Host: www.caselogic.com.
Now just pipe your input into the Perl script:
cat input | perl title_host.pl
output:
Title: logo11w.png
URL: http://www.google.com/images/srpr/logo11w.png
Title: Products - Case Logic
URL: http://www.caselogic.com/en-us/us/products
How do I make a bash script that will copy all links from a website (without downloading it)? The goal is only to get all the links and then save them to a txt file.
I've tried this code:
wget --spider --force-html -r -l1 http://somesite.com | grep 'Saving to:'
Example: there are download links within a website (for example, dlink.com), so I just want to copy all links that contain dlink.com and save them to a txt file.
I've searched around using Google and found nothing useful.
Using a proper parser in Perl:
#!/usr/bin/env perl
use strict;
use warnings;
use LWP::UserAgent;
use HTML::LinkExtor;
use URI::URL;

my $ua = LWP::UserAgent->new;
my ($url, $f, $p, $res);
if (@ARGV) {
    $url = $ARGV[0];
} else {
    print "Enter a URL: ";
    $url = <>;
    chomp($url);
}
my @array = ();

sub callback {
    my ($tag, %attr) = @_;
    return if $tag ne 'a';  # we only look closer at <a href ...>
    push(@array, values %attr) if $attr{href} =~ /dlink\.com/i;
}

# Make the parser. Unfortunately, we don't know the base yet
# (it might be different from $url)
$p = HTML::LinkExtor->new(\&callback);

# Request the document and parse it as it arrives
$res = $ua->request(HTTP::Request->new(GET => $url),
                    sub { $p->parse($_[0]) });

# Expand all URLs to absolute ones
my $base = $res->base;
@array = map { url($_, $base)->abs } @array;

# Print them out
print join("\n", @array), "\n";
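To end up with a txt file, as the question asks, you could run the script with the site URL and redirect its output (the script filename here is just an example):
perl extract_links.pl http://somesite.com > links.txt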
Is there a Perl script to get the owner's/author's name of a file?
my $owner = getpwuid((stat($file))[4]);
See stat and getpwuid for more detail.
Update: for Windows,
from this post: http://www.perlmonks.org/?node_id=865219
use Win32::OLE;
my $objShell = Win32::OLE->CreateObject("Shell.Application");
my $objFolder = $objShell->Namespace("c:\\a") or die "$!";
my $a = $objFolder->ParseName("a.txt") or die "$!";
print $objFolder->GetDetailsOf($a, 8) or die "$!";
or,
use Win32::Perms;
my $username = Win32::Perms->new($filename)->Owner;
#!/usr/bin/perl -w
my #sb = stat "/etc/passwd";
my $userid = $sb[4];
my #pwent = getpwuid $userid;
my $username = $pwent[0];
print "/etc/passwd is owned by $username\n";
$ /tmp/foo.pl
/etc/passwd is owned by root
The perldoc perlfunc guide has lots of information on these families of functions.