My homemade captcha displays the text in a straight line, and I think that's why the bots can still sign up - they're getting in.
Can someone show me how to revise my code below to add a function or something that would make each letter display a little higher or lower than the others? Please don't mind the messy code, as I'm not a PHP professional. Thank you everyone for your help.
$image = imagecreatetruecolor(70, 20);
// Draw some random noise lines across the image
for ($i = 0; $i < rand(20, 40); $i++) {
    $x = rand(0, 70);
    $y = rand(0, 20);
    imageline($image, $x, $y, $x + rand(0, 10), $y + rand(0, 240), imagecolorallocate($image, rand(0, 255), rand(0, 190), rand(1, 90)));
    imageline($image, $x, $y, $x + rand(0, 11), $y + rand(0, 34), imagecolorallocate($image, 255, rand(50, 240), rand(240, 241)));
}
// Random colours, vertical position and font for the text
$s = rand(1, 240);
$x = rand(50, 240);
$f = rand(1, 4);
$d = rand(1, 1);
$c = rand(1, 4);
// Draw the captcha text (passed in as ?T=...) twice, slightly offset
imagestring($image, $c, 3, $f, $_GET["T"], imagecolorallocate($image, $s, $x, $s));
imagestring($image, $c, 4, $f, $_GET["T"], imagecolorallocate($image, 255, rand(50, 240), rand(240, 241)));
imagecolortransparent($image, imagecolorallocate($image, 255, 0, 0));
imageinterlace($image, true);
header("Content-type: image/gif");
imagegif($image);
imagedestroy($image);
The code:
$image = imagecreatetruecolor(70, 20);
// Draw some random noise lines across the image
for ($i = 0; $i < rand(20, 40); $i++) {
    $x = rand(0, 70);
    $y = rand(0, 20);
    imageline($image, $x, $y, $x + rand(0, 10), $y + rand(0, 240), imagecolorallocate($image, rand(0, 255), rand(0, 190), rand(1, 90)));
    imageline($image, $x, $y, $x + rand(0, 11), $y + rand(0, 34), imagecolorallocate($image, 255, rand(50, 240), rand(240, 241)));
}
$s = rand(1, 240);
$x = rand(50, 240);
$f = rand(1, 4);
$d = rand(1, 1);
$pos_x = rand(5, 10);
// Draw each character separately so the font size and spacing can vary per letter
$strArr = str_split($_GET["T"]);
foreach ($strArr as $str) {
    $font_size = rand(1, 5); // built-in GD fonts range from 1 to 5
    imagestring($image, $font_size, $pos_x, $f, $str, imagecolorallocate($image, $s, $x, $s));
    imagestring($image, $font_size, $pos_x, $f, $str, imagecolorallocate($image, 255, rand(50, 240), rand(240, 241)));
    $pos_x = $pos_x + rand(20, 30); // adjust the letter spacing; depends on the max characters per captcha
}
imagecolortransparent($image, imagecolorallocate($image, 255, 0, 0));
imageinterlace($image, true);
header("Content-type: image/gif");
imagegif($image);
imagedestroy($image);
Nothing special, but functional for your requirements :)
I added a random font size and random letter spacing :)
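If you also want each letter to sit a little higher or lower (the vertical variation asked about above), a minimal tweak of the same loop could randomize the y position per letter as well; this is just a sketch reusing the variables from the code above:

// Sketch: per-letter vertical jitter in addition to the random size and spacing.
// Assumes $image, $s, $x and $pos_x are set up exactly as in the code above.
$strArr = str_split($_GET["T"]);
foreach ($strArr as $str) {
    $font_size = rand(1, 5);  // random built-in GD font per letter
    $pos_y = rand(0, 5);      // random vertical offset; tune this to your image height
    imagestring($image, $font_size, $pos_x, $pos_y, $str, imagecolorallocate($image, $s, $x, $s));
    $pos_x = $pos_x + rand(20, 30); // random horizontal spacing
}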
The GraphicsMagick package for Perl is very extensive, but the documentation appears light (or confusing to me). If I have an image and I want to ensure the size is within 800x200 (like https://via.placeholder.com/800x200) while maintaining the aspect ratio of the image, which commands should I use?
In PHP I have:
$tmpImageFile->scaleImage(100,0);
$imageGeometry = $tmpImageFile->getImageGeometry();
$imageHeight = $imageGeometry['height'];
if ( $imageHeight > 100 ) {
    $tmpImageFile->scaleImage(0,100);
}
Maybe you could do something like this:
use feature qw(say);
use strict;
use warnings;
use Graphics::Magick;
my $img = Graphics::Magick->new;
$img->ReadImage('gm.png');
my $height = $img->Get('height');
my $width = $img->Get('width');
say "Height = ", $img->Get('height');
say "Width = ", $img->Get('width');
my $target_width = 800;
my $target_height = 200;
my $factor1 = $target_width/$width;
my $factor2 = $target_height/$height;
say "factor1 = $factor1";
say "factor2 = $factor2";
say "$width x $factor1 = ", $width * $factor1;
my $height1 = $height * $factor1;
say "$height x $factor1 = ", $height1;
my $width2 = $width * $factor2;
say "$width x $factor2 = ", $width2;
say "$height x $factor2 = ", $height * $factor2;
$target_height = $height1 if $height1 < $target_height;
$target_width = $width2 if $width2 < $target_width;
$img->Scale(height => $target_height, width => $target_width);
say "Height = ", $img->Get('height');
say "Width = ", $img->Get('width');
$img->Write('out.jpg');
Here I have created an input image "gm.png":
When running the script I get output:
Height = 404
Width = 1548
factor1 = 0.516795865633075
factor2 = 0.495049504950495
1548 x 0.516795865633075 = 800
404 x 0.516795865633075 = 208.785529715762
1548 x 0.495049504950495 = 766.336633663366
404 x 0.495049504950495 = 200
Height = 200
Width = 766
And the saved output file out.jpg:
The algorithm for solving this problem is quite simple.
Compute the scale factor for the width and for the height, take the smaller of the two, and apply it to the image.
use strict;
use warnings;
use Graphics::Magick;
my $fname = shift || die "Provide filename";
my($image,$img);
my($scale,$scaleH,$scaleW);
my $holder = {
width => 800,
height => 200
};
$image = Graphics::Magick->new;
$image->ReadImage($fname);
$img->{height} = $image->Get('height');
$img->{width} = $image->Get('width');
$scaleH = $holder->{height} / $img->{height};
$scaleW = $holder->{width} / $img->{width};
$scale = $scaleH < $scaleW ? $scaleH : $scaleW;
$image->Scale( height => $img->{height}*$scale, width => $img->{width}*$scale );
$image->write('image_new.jpg');
exit 0;
For the test, a random file was taken from the internet.
Result verification:
$ file image.png
image.png: PNG image data, 2250 x 585, 8-bit/color RGB, non-interlaced
$ file image_new.jpg
image_new.jpg: JPEG image data, JFIF standard 1.01, resolution (DPI), density 72x72, segment length 16, baseline, precision 8, 769x200, components 3
I am trying to implement SURF detection and tracking using a FlannBased matcher.
My code is working properly for the detection part, but the issue is with tracking.
You can see in the above image that the tracking rectangle is not focusing on the right object. Moreover, the rectangle stays static even when I move my camera around. I am not sure where I am going wrong.
Here is the code I have implemented:
void surf_detection::surf_detect(){
    UMat img_extractor, snap_extractor;
    if (crop_image_.empty())
        cv_snapshot.copyTo(dst);
    else
        crop_image_.copyTo(dst);
    //dst = QImagetocv(crop_image_);
    imshow("dst", dst);
    Ptr<SURF> detector = SURF::create(minHessian);
    Ptr<DescriptorExtractor> extractor = SURF::create(minHessian);
    cvtColor(dst, src, CV_BGR2GRAY);
    cvtColor(frame, gray_image, CV_BGR2GRAY);
    detector->detect(src, keypoints_1);
    //printf("Object: %d keypoints detected\n", (int)keypoints_1.size());
    detector->detect(gray_image, keypoints_2);
    //printf("Object: %d keypoints detected\n", (int)keypoints_1.size());
    extractor->compute(src, keypoints_1, img_extractor);
    // printf("Object: %d descriptors extracted\n", img_extractor.rows);
    extractor->compute(gray_image, keypoints_2, snap_extractor);
    std::vector<Point2f> scene_corners(4);
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = (cvPoint(0, 0));
    obj_corners[1] = (cvPoint(src.cols, 0));
    obj_corners[2] = (cvPoint(src.cols, src.rows));
    obj_corners[3] = (cvPoint(0, src.rows));
    vector<DMatch> matches;
    matcher.match(img_extractor, snap_extractor, matches);
    double max_dist = 0; double min_dist = 100;
    //-- Quick calculation of max and min distances between keypoints
    for (int i = 0; i < img_extractor.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    //printf("-- Max dist : %f \n", max_dist);
    //printf("-- Min dist : %f \n", min_dist);
    vector< DMatch > good_matches;
    for (int i = 0; i < img_extractor.rows; i++)
    {
        if (matches[i].distance <= max(2 * min_dist, 0.02))
        {
            good_matches.push_back(matches[i]);
        }
    }
    UMat img_matches;
    drawMatches(src, keypoints_1, gray_image, keypoints_2,
        good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
        vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    if (good_matches.size() >= 4){
        for (int i = 0; i < good_matches.size(); i++){
            //get the keypoints from good matches
            obj.push_back(keypoints_1[good_matches[i].queryIdx].pt);
            scene.push_back(keypoints_2[good_matches[i].trainIdx].pt);
        }
    }
    H = findHomography(obj, scene, CV_RANSAC);
    perspectiveTransform(obj_corners, scene_corners, H);
    line(img_matches, scene_corners[0], scene_corners[1], Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[1], scene_corners[2], Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[2], scene_corners[3], Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[3], scene_corners[0], Scalar(0, 255, 0), 4);
    imshow("Good matches", img_matches);
}
Your matches are correct; you are simply displaying them wrong. The projected corners (scene_corners) are in the gray_image coordinate system, but you are drawing them in the img_matches coordinate system, where gray_image is placed to the right of src.
So, basically, you need to translate them by the width of src:
line(img_matches, scene_corners[0] + Point2f(src.cols,0), scene_corners[1] + Point2f(src.cols,0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[1] + Point2f(src.cols,0), scene_corners[2] + Point2f(src.cols,0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[2] + Point2f(src.cols,0), scene_corners[3] + Point2f(src.cols,0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[3] + Point2f(src.cols,0), scene_corners[0] + Point2f(src.cols,0), Scalar(0, 255, 0), 4);
See also this related answer.
I'm trying to use shadowCascade for directional lights. Using any of the example code mentioned in https://github.com/mrdoob/three.js/issues/1888 results in a shader error plus a recurring error: Object [object Object] has no method 'decompose'.
Since it's a seldom-used and undocumented feature, I have no clue where to even begin debugging.
Even leaving all the code out and enabling shadowCascade for the light in the console while the scene is running makes the recurring error above show up.
Any help would be greatly appreciated!
Kind regards,
Doidel
PS: People always want to see some code. So here's some code.
var sunlight = new THREE.DirectionalLight();
sunlight.intensity = 0.5;
sunlight.position.set(100, 300, 100);
sunlight.castShadow = true;
sunlight.shadowBias = -0.0001;
sunlight.shadowMapWidth = sunlight.shadowMapHeight = 2048;
sunlight.shadowDarkness = 0.7;
var d = 250;
sunlight.shadowCameraLeft = -d;
sunlight.shadowCameraRight = d;
sunlight.shadowCameraTop = d;
sunlight.shadowCameraBottom = -d;
sunlight.shadowCameraNear = 200;
sunlight.shadowCameraFar = 800;
sunlight.shadowDarkness = 0.6;
sunlight.shadowBias = 0.000065;
sunlight.shadowCascade = true;
sunlight.shadowCascadeCount = 3;
sunlight.shadowCascadeNearZ = [ -1.000, 0.9, 0.975 ];
sunlight.shadowCascadeFarZ = [ 0.9, 0.975, 1.000 ];
sunlight.shadowCascadeWidth = [ 2048, 2048, 2048 ];
sunlight.shadowCascadeHeight = [ 2048, 2048, 2048 ];
sunlight.shadowCascadeBias = [ 0.00005, 0.000065, 0.000065 ];
sunlight.shadowCascadeOffset.set( 0, 0, -10 );
scene.add( sunlight );
sunlight.lookAt(new THREE.Vector3(0,0,0));
There was another discussion about the usage of shadowCascade, with the conclusion to not use shadowCascade for now, as it is not being maintained.
I'm on Windows 7, and I am trying to display an icon with transparency on my context menu, but it doesn't work.
I am trying to use LoadImage like this:
m_hMenuBmp = (HBITMAP)::LoadImage(g_hInst, L"C:\\Users\\nicolas\\AppData\\Roaming\\MyApp\\icon.bmp", IMAGE_BITMAP, 16, 16, LR_LOADFROMFILE | LR_LOADTRANSPARENT );
and my icon.bmp is saved as a 256-color bitmap with a white (255, 255, 255) background...
I don't know why this isn't working.
I tried Raymond Chen's ARGB method, but it didn't work either:
int cx = GetSystemMetrics(SM_CXSMICON);
int cy = GetSystemMetrics(SM_CYSMICON);
BITMAPINFO bmi = {0};
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = cx;
bmi.bmiHeader.biHeight = cy;
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;
DWORD *pBits;
m_hMenuBmp = CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS, (void **)&pBits, NULL, 0);
if (m_hMenuBmp)
{
    for (int y = 0; y < cy; y++)
    {
        for (int x = 0; x < cx; x++)
        {
            BYTE bAlpha = x * x * 255 / cx / cx;
            DWORD dv = (bAlpha << 24) | (bAlpha << 16) | bAlpha;
            pBits[y * cx + x] = dv; // write the premultiplied ARGB pixel
        }
    }
}
And I don't know why my icon isn't displayed with this method either.
I found a way to do this easily:
HICON hIcon = (HICON)LoadImage( NULL, L"icon.ico", IMAGE_ICON, 16, 16, LR_LOADFROMFILE );
HDC hDC = ::GetDC( NULL );
m_hMenuBmp = ::CreateCompatibleBitmap( hDC, 16, 16 );
HDC hDCTemp = ::CreateCompatibleDC( hDC );
::ReleaseDC( NULL, hDC );
HBITMAP hBitmapOld = ( HBITMAP ) ::SelectObject( hDCTemp, m_hMenuBmp );
::DrawIconEx( hDCTemp, 0, 0, hIcon, 16, 16, 0, ::GetSysColorBrush( COLOR_MENU ), DI_NORMAL );
::SelectObject( hDCTemp, hBitmapOld );
::DeleteDC( hDCTemp );
I was able to get this to work:
HBITMAP hBitmap = (HBITMAP)::LoadImage(NULL, "C:\\moo\\res\\bitmap1.bmp", IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE | LR_LOADTRANSPARENT | LR_LOADMAP3DCOLORS);
m_pic.SetBitmap(hBitmap);
The trick was LR_LOADMAP3DCOLORS together with LR_LOADTRANSPARENT. This was for a dialog box, by the way. Without LR_LOADMAP3DCOLORS, my white background stayed white.
I'm working on allowing users to upload profile pictures for my site. The classic example of what I'm trying to avoid is plentyoffish.com, where each user's image is skewed and looks very ugly:
So, how can I programmatically crop/create standard-sized versions of an image without the skewing demonstrated above?
Well, you must have a maximum height and width. Let's assume the image size you have available is square, say 100x100.
When a user uploads an image, get its dimensions, then work out which is greater, the height or the width.
Then take the greater measurement, get the ratio of your target measurement to it, and use that ratio to scale both the height and the width.
So if the user uploads a picture 500 pixels high and 450 wide, the height is greater, so you'd divide 100 (your thumbnail size) by 500. This gives us 0.2 as the ratio, which means the width will become 90, so you would shrink to 100x90 and no distortion would occur.
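For illustration, a minimal PHP/GD sketch of that approach (the file paths and the 100-pixel box are placeholder assumptions, and JPEG input is assumed):

// Sketch: scale so the longest side becomes $max, preserving the aspect ratio (no cropping).
function makeThumbnail($srcPath, $dstPath, $max = 100)
{
    list($w, $h) = getimagesize($srcPath);
    $ratio = $max / max($w, $h);     // e.g. 100 / 500 = 0.2 for a 450x500 upload
    if ($ratio > 1) {
        $ratio = 1;                  // never enlarge images smaller than the box
    }
    $newW = (int) round($w * $ratio);
    $newH = (int) round($h * $ratio);
    $src = imagecreatefromjpeg($srcPath);
    $dst = imagecreatetruecolor($newW, $newH);
    imagecopyresampled($dst, $src, 0, 0, 0, 0, $newW, $newH, $w, $h);
    imagejpeg($dst, $dstPath, 90);
    imagedestroy($src);
    imagedestroy($dst);
}

With the 450x500 example above, this produces a 90x100 (width by height) thumbnail, matching the 100x90 result described.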
Here's some code (C#) I used to do a resize, similar to the method suggested by blowdart. Just replace the "300"s with the maximum size of one side in your case:
private Bitmap ScaleImage(Image oldImage)
{
    double resizeFactor = 1;
    if (oldImage.Width > 300 || oldImage.Height > 300)
    {
        double widthFactor = Convert.ToDouble(oldImage.Width) / 300;
        double heightFactor = Convert.ToDouble(oldImage.Height) / 300;
        resizeFactor = Math.Max(widthFactor, heightFactor);
    }
    int width = Convert.ToInt32(oldImage.Width / resizeFactor);
    int height = Convert.ToInt32(oldImage.Height / resizeFactor);
    Bitmap newImage = new Bitmap(width, height);
    Graphics g = Graphics.FromImage(newImage);
    g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
    g.DrawImage(oldImage, 0, 0, newImage.Width, newImage.Height);
    return newImage;
}
OR: If you would still like fixed dimensions, follow blowdart's instructions but calculate the greatest ratio instead: 100px / 450px = 0.22...
width: 100px
height: 111.11...px -> crop starting at floor((111.11 - 100) / 2) from the top and keep 100px down.
EDIT: Or let the user select how to crop the greatest dimension.
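A similar PHP/GD sketch for that fixed-dimension variant, scaling by the greater ratio and then cropping the overflow from the centre (again, the function name, paths and 100x100 target are illustrative assumptions):

// Sketch: scale by the greater ratio so the image covers the target box, then centre-crop it.
function makeSquareThumbnail($srcPath, $dstPath, $size = 100)
{
    list($w, $h) = getimagesize($srcPath);
    $ratio = max($size / $w, $size / $h);        // e.g. 100 / 450 = 0.22... for a 450x500 upload
    $newW = (int) ceil($w * $ratio);
    $newH = (int) ceil($h * $ratio);
    $src = imagecreatefromjpeg($srcPath);
    $scaled = imagecreatetruecolor($newW, $newH);
    imagecopyresampled($scaled, $src, 0, 0, 0, 0, $newW, $newH, $w, $h);
    $dst = imagecreatetruecolor($size, $size);
    $offsetX = (int) floor(($newW - $size) / 2); // centre the crop horizontally
    $offsetY = (int) floor(($newH - $size) / 2); // ...and vertically
    imagecopy($dst, $scaled, 0, 0, $offsetX, $offsetY, $size, $size);
    imagejpeg($dst, $dstPath, 90);
    imagedestroy($src);
    imagedestroy($scaled);
    imagedestroy($dst);
}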
Use ImageMagick. On the command line, use the -thumbnail option with a geometry argument, for example:
convert input.jpg -thumbnail 100x100 output.jpg
This fits the image within 100x100 while keeping its aspect ratio.
I made this function for PHP a while ago that works great for this and some other scenarios:
<?php
function Image($source, $crop = null, $resize = null)
{
    $source = ImageCreateFromString(file_get_contents($source));
    if (is_resource($source) === true)
    {
        $width = imagesx($source);
        $height = imagesy($source);
        if (isset($crop) === true)
        {
            $crop = array_filter(explode('/', $crop), 'is_numeric');
            if (count($crop) == 2)
            {
                if (($width / $height) > ($crop[0] / $crop[1]))
                {
                    $width = $height * ($crop[0] / $crop[1]);
                    $crop = array((imagesx($source) - $width) / 2, 0);
                }
                else if (($width / $height) < ($crop[0] / $crop[1]))
                {
                    $height = $width / ($crop[0] / $crop[1]);
                    $crop = array(0, (imagesy($source) - $height) / 2);
                }
            }
            else
            {
                $crop = array(0, 0);
            }
        }
        else
        {
            $crop = array(0, 0);
        }
        if (isset($resize) === true)
        {
            $resize = array_filter(explode('*', $resize), 'is_numeric');
            if (count($resize) >= 1)
            {
                if (empty($resize[0]) === true)
                {
                    $resize[0] = round($resize[1] * $width / $height);
                }
                else if (empty($resize[1]) === true)
                {
                    $resize[1] = round($resize[0] * $height / $width);
                }
            }
            else
            {
                $resize = array($width, $height);
            }
        }
        else
        {
            $resize = array($width, $height);
        }
        $result = ImageCreateTrueColor($resize[0], $resize[1]);
        if (is_resource($result) === true)
        {
            ImageCopyResampled($result, $source, 0, 0, $crop[0], $crop[1], $resize[0], $resize[1], $width, $height);
            ImageDestroy($source);
            header('Content-Type: image/jpeg');
            ImageJPEG($result, null, 90);
            ImageDestroy($result);
        }
    }
    return false;
}

Image('/path/to/your/image.jpg', '1/1', '100*');
Image('/path/to/your/image.jpg', '1/1', '100*100');
Image('/path/to/your/image.jpg', '1/1', '100*500');
?>
Here's a bash command I threw together to accomplish this using ImageMagick's convert tool. For a set of images sitting in the parent directory, some portrait and some landscape, it creates images in the current directory scaled to 600x400, cropping portrait images from the centre and simply scaling the landscape images:
for f in ../*jpg; do
    echo $f;
    size=`identify $f | cut -d' ' -f 3`;
    w=`echo $size | cut -dx -f 1`;
    h=`echo $size | cut -dx -f 2`;
    if [ $w -gt $h ]; then
        convert $f -thumbnail 600x400 `basename $f`;
    else
        # scale to 600 wide, then centre-crop 400 high: offset = (scaled height - 400) / 2
        convert $f -scale 600x -crop 600x400+0+`echo "(600*$h/$w-400)/2" | bc` `basename $f`;
    fi;
done;