Make JavaScript bookmarklet open in new tab/window? - bookmarklet

I got a bookmarklet from Dirpy. When you're on a YouTube video, and you click it, it automatically takes you to the Dirpy website to download the video. Is there a way to make it open in a new window/tab?
I've tried a few very simple things, but I don't know any JavaScript, so they didn't work.
Here's the script:
javascript:%20/*_Dirpy_Studio_Bookmarklet_*/(function(){var%20b=document.getElementsByTagName("head")[0];var%20c=new%20Date().getTime();var%20a=document.createElement("script");a.src="http://dirpy.com/js/studio-bookmarklet.js?"+c;a.onload=a.onreadystatechange=function(){if(!loaded&&(!this.readyState||this.readyState=="loaded"||this.readyState=="complete")){a.onload=a.onreadystatechange=null;b.removeChild(a)}};b.appendChild(a)})();
Thanks!

The redirect to dirpy.com is done in the external script, so unless you rewrite that, no.
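If you don't mind bypassing Dirpy's script entirely, one workaround is a bookmarklet that opens Dirpy in a new tab yourself and hands it the current video URL. This is only a sketch - the ?url= query parameter is an assumption about what dirpy.com accepts, not something confirmed by their script:
javascript:(function () {
  // Open dirpy in a new tab/window, passing along the current page URL.
  // NOTE: the "?url=" parameter is a guess at dirpy's interface.
  window.open("http://dirpy.com/studio?url=" + encodeURIComponent(location.href), "_blank");
})();
(Collapse it back onto one line before saving it as a bookmark.)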

Related

How do I make a Unity2D game playable with a Chrome extension

I know no Java or any other language besides C#, so how would I try to make it a popup Chrome extension that you can play in your browser?
Maybe you can read this: https://developer.chrome.com/docs/extensions/mv3/getstarted/
I'm pretty sure that in Unity, you can build an application as WebGL. You can read this if you want: https://docs.unity3d.com/Manual/webgl-building.html
Then you can take the code they give you for the WebGL build and put it inside the extension's HTML, and then I think it'll work.
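To make that concrete, here is a minimal sketch of a manifest.json for such an extension. Everything here is hypothetical (names, file layout), and it assumes you copy Unity's generated index.html plus its Build/ folder into the extension directory; Unity's WebGL loader runs WebAssembly, so the CSP line with 'wasm-unsafe-eval' may be needed:
{
  "manifest_version": 3,
  "name": "My Unity2D Game",
  "version": "1.0",
  "action": { "default_popup": "index.html" },
  "content_security_policy": {
    "extension_pages": "script-src 'self' 'wasm-unsafe-eval'; object-src 'self'"
  }
}
Then load the folder with "Load unpacked" on chrome://extensions (Developer mode on). Popups are small, though, so opening the game in a full extension tab may work better.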

How to download a PDF file in Ruby without .pdf in the link

I need to download a PDF from a website which does not provide a link ending with .pdf, using Ruby. Manually, when I click on the link to download the PDF, it takes me to a new page and the dialog box to save/open the file appears after some time.
Please help me download the file.
The link: http://www.lawcommission.gov.np/en/documents/prevailing-laws/constitution/func-download/129/chk,d8c4644b0f086a04d8d363cb86fb1647/no_html,1/
You can do this:
require 'open-uri'

# Stream the remote file's bytes into a local PDF
File.open('my_file_name.pdf', 'wb') do |file|
  file.write open('http://someurl.com/2013-1-2/somefile/download').read
end
I have been doing this for my projects and it works.
If you just need a simple Ruby script to do it, I'd just run wget, like this: exec 'wget "http://path.to.the.file/and/some/params"'
At that point though, you might as well run wget.
The other way is to just run a GET request against the URL you know the PDF is at:
require 'net/http'
source = Net::HTTP.get('the.website.com', '/and/some/params')
There are a number of other HTTP clients that you could use, but as long as you make a GET request to the endpoint the PDF is at, it should give you the raw data. Then you just write that data out under a .pdf name and you'll have the PDF.
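For example, continuing from the snippet above (the file name is made up):
# Write the raw response body to disk under a .pdf name
File.open('thefile.pdf', 'wb') { |f| f.write(source) }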
In your case, I ran the following commands to get the pdf
wget http://www.lawcommission.gov.np/en/documents/prevailing-laws/constitution/func-download/129/chk,d8c4644b0f086a04d8d363cb86fb1647/no_html,1/
mv index.html thefile.pdf
Then open the PDF. Note that these are Linux commands. If you want to get the file with a Ruby script, you could use something like what I previously mentioned.
Update:
There is an added complication that was not initially stated: the URL to the PDF changes every time the PDF is updated. To make this work, you probably want to do some web scraping; I suggest nokogiri. That way you can look at the page where the download link lives and then perform a GET request on the desired URL. Furthermore, the server that hosts the PDF is misconfigured and breaks Chrome within a few seconds of opening the page.
How to solve this problem: I went to the site and refreshed it, then broke the connection to the server (press the X where the refresh button would otherwise be). Then right-click next to the download link, select "Inspect element", and browse the DOM to find something that definitively identifies the link (like an id). Thankfully, I found <strong id="telecharger"> Download</strong>. This means you can use something like page.css('strong#telecharger')[0].parent['href'] to get the URL, and then perform a GET request as described above. I don't have time to write the script for you (too much work to do), but this should be enough to solve the problem.
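Putting that together, here is a rough nokogiri sketch; the listing-page URL and output file name are placeholders I made up, not the real ones:
require 'open-uri'
require 'nokogiri'

# Placeholder for the page that hosts the download link
page = Nokogiri::HTML(open('http://www.lawcommission.gov.np/the/page/with/the/link'))
# Find the <strong id="telecharger"> and follow its parent link's href
pdf_url = page.css('strong#telecharger')[0].parent['href']
File.open('thefile.pdf', 'wb') { |f| f.write open(pdf_url).read }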

Using Headless Firefox to Save All HTML Files Using the Command Line in Linux

I'm currently using shell_exec with Xvfb and Firefox to capture screenshots. However, I need to download the entire HTML (e.g. Save Page As --> Web Page, complete) to a directory using shell_exec. I have looked at all the different options available on the Mozilla developer forums but have not been able to figure out how to do this.
This code appears to be what I might need, but where and how is it implemented so that it is accessible from shell_exec?
// Create an nsILocalFile pointing at the output path
var file = Components.classes["@mozilla.org/file/local;1"]
           .createInstance(Components.interfaces.nsILocalFile);
file.initWithPath("C:\\filename.html");
// Persist the current document (and its resources) to that file
var wbp = Components.classes["@mozilla.org/embedding/browser/nsWebBrowserPersist;1"]
          .createInstance(Components.interfaces.nsIWebBrowserPersist);
wbp.saveDocument(content.document, file, null, null, null, null);
The Above Code Source
void saveDocument(
  in nsIDOMDocument aDocument,
  in nsISupports aFile,
  in nsISupports aDataPath,
  in string aOutputContentType,
  in unsigned long aEncodingFlags,
  in unsigned long aWrapColumn
);
The Above Code Source
There is a Stackoverflow manual solution here but it does not address shell_exec:
How to save a webpage locally including pictures,etc
There are a few options that I know of, but none that fit your question exactly.
Open firefox http://yoursite.com from the shell, then send keystrokes to Firefox using xte or a similar method. (This is not headless mode, though.)
Download the page using wget; it can work recursively and fetch a page's requisites (see the example after this list). Alternatively, you can parse the HTML yourself if it is quite a simple web page. If you need to submit a form, use curl instead of wget.
Use the Greasemonkey add-on and write a script that gets loaded on http://some-fake-page.com/?download=http://yoursite.com, then open Firefox with that fake-page URL.
Develop your own Firefox add-on to do the above work.
There may be other better options for this as well, but I don't know them.
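For the wget option, something close to Firefox's "Save Page As --> Web Page, complete" can be done with wget alone, which also works fine from shell_exec (the URL is a placeholder):
# -p  fetch page requisites (images, CSS, JS)
# -k  convert links in the page to point at the local copies
# -E  save files with .html extensions where needed
# -H  allow requisites hosted on other domains
wget -p -k -E -H http://yoursite.com/page.html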

How to make images in my webpages not downloadable

I am wondering how to prevent people from using "Save image as..." by right-clicking images on my webpages.
I was thinking about disabling right-click, but it seems I would have to write JavaScript code. Is there an easy way to do this?
The simple answer is "you cannot do that". You might be able to put something on the server side that will check the referer before serving the image, but even that is not 100% guaranteed. Moreover, even if you did manage somehow to prevent this, nothing would prevent somebody from taking a screenshot of the browser page and then cropping the image out of it.
I think a much better approach would be server-side URL rewriting and processing of the images to add some sort of visible watermark identifying the images as owned by you, and to save proper copyright information in the EXIF data.
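As a sketch of the referer check mentioned in the first paragraph - assuming a Node/Express server, which the answer does not specify, and keeping in mind that the Referer header is trivially spoofed, so this is a deterrent rather than protection:
const express = require('express');
const app = express();

// Only serve /images/* when the Referer header points at our own site
app.use('/images', (req, res, next) => {
  const referer = req.get('Referer') || '';
  if (!referer.startsWith('https://example.com/')) {
    return res.status(403).send('Forbidden');
  }
  next();
});
app.use('/images', express.static('images'));

app.listen(3000);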
You can make a div that is the same width and height as the image and then set the image as the background for that div. That will prevent people from directly saving the image via right-click, but they can still enter the image URL and download it from there. I made a tutorial on this myself right here: http://www.andytechguy.com/tutorials/html/how_to_nodownload/
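A minimal example of that technique (the dimensions and file name are placeholders):
<!-- The image is a CSS background, so right-click offers no "Save image as..." -->
<div style="width: 400px; height: 300px;
            background-image: url('photo.jpg');
            background-size: cover;"></div>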
There's an easy solution for this which I used on my website. Just add the oncontextmenu="return false;" attribute to the img tag and you are done with it!
<img src="https://source.unsplash.com/random" alt="Random image" oncontextmenu="return false;">
This is my first question to be answered on Stack Overflow, so please bear with me if I didn't use the right tools...
As long as the image URL is in the source code, the image is downloadable using the Unix command wget or anything similar. I'm not a JavaScript expert, but I believe you could read the location of the photo from a text file instead of hard-coding the URL in the HTML. Then you could configure whatever web server you are using to return a 403 (Forbidden) when someone tries to view that text file directly. This still wouldn't stop screenshots, though.
Something like this:
<img src="some javascript to read text file">
Then have the text file contain:
/path/to/obscurely/named/photo.png
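An img src can't literally contain JavaScript, so as a sketch of what that idea could look like in practice (the path and element id are made up):
<img id="protected" alt="">
<script>
  // Fetch the image path from a text file at runtime instead of
  // hard-coding it in the HTML source
  fetch('/paths/photo.txt')
    .then(function (r) { return r.text(); })
    .then(function (path) {
      document.getElementById('protected').src = path.trim();
    });
</script>
Note that if the server returns 403 for the text file, this fetch is blocked too, so the file has to stay readable to your own scripts.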
Ya, this isn't really possible. Another option is to use Lightroom or something else to batch-add watermarks. Watermarks are the only option I'm aware of that will almost completely protect you, because even the screenshot route is of little use unless they are a wizard in Photoshop.
In conclusion, I think Lightroom or something similar is your best and easiest shot at getting what you're looking for.
You can do this by converting the image format from JPG to SVG. There are a lot of converters online, e.g. https://convertio.co/jpg-svg/
After this you copy and paste the SVG code into your HTML to replace the JPG.

CKEditor: MediaEmbed plugin won't work

I'm using CKEditor for my site.
Now I found the plugin called "MediaEmbed". I need it for embedding YouTube videos.
I installed it and the integration worked fine, but embedding won't work.
When you paste the code into the text area in the embedding dialog and then click OK, nothing happens in IE and Chrome, and in Firefox it just adds an image as a Flash content placeholder.
Let's say the Flash content placeholder image is just for the WYSIWYG interface; but then I should get the embed code when I click on "view source" - yet no, there you just see the source of the placeholder image's div and img tags.
Then let's say the embed code is saved internally; so I save the file I create with CKEditor, and the output I get is just what I entered, without anything the MediaEmbed plugin generated at all.
How to fix this?
Please help!
Yours Joern.
Use Firebug and see - it'll be giving a cross-domain error. The plugin has a bug: as a workaround, use a try/catch in the place where it accesses the window.name property.
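A sketch of that workaround - the variable name is made up, and the exact spot in the plugin source is for you to find:
var frameName;
try {
  // Reading window.name across domains can throw a security error
  frameName = embedFrame.contentWindow.name;
} catch (e) {
  frameName = null; // swallow the cross-domain error and fall back
}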
Try the CKEditor YouTube plugin instead.
