I would like to use Automator to:
1- extract URLs from a text file with about 50 URLs
2- open a URL in Firefox
3- take a screenshot of the window
4- close the window
5- do it again for each of the remaining 49 URLs.
First step: I can't extract URLs from the text file; Automator gives me nothing when I do it.
Well, this is done now; my mistake was that I had to use Get Contents of TextEdit Document before extracting the URLs.
Second thing: I don't know how to make it loop, URL after URL.
Right now it opens all the URLs at the same time in different tabs, which makes my Firefox shut down because of the number of tabs open at once.
How could I make it go URL after URL?
It's the first time I've used Automator, and I know nothing about AppleScript.
Any help?
No need for Automator; just use webkit2png, which you can install easily with Homebrew like this:
brew install webkit2png
Then put a list of all your sites in a file called sites.txt that looks like this:
http://www.google.com
http://www.ibm.com
and then run webkit2png like this:
webkit2png - < sites.txt
Or, if you don't like that approach, you can do something like this with just the built-in tools in OS X. Save the following in a text file called GrabThem:
#!/bin/bash
# number the screenshots starting at 1
i=1
while read -r f
do
    echo "Processing $f..."
    open "$f"
    sleep 3                     # give the page time to load
    screencapture "${i}.png"
    ((i++))
done < sites.txt
Then make it executable in Terminal (you only need do this once) with
chmod +x GrabThem
Then run it like this in Terminal:
./GrabThem
and the files will be called 1.png, 2.png etc.
You can see the newest files at the bottom of the list if you run:
ls -lrt
You may want to look at the options for screencapture, maybe the ones for selecting a specific window rather than the whole screen. You can look at the options by typing:
man screencapture
and hitting SPACE to go forwards a page and q to quit.
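If you also want step 4 from the question (closing each window after the grab), here is one possible variant, a sketch only: it assumes Firefox ends up as the frontmost application and that Terminal is allowed to send keystrokes through System Events.
#!/bin/bash
# variant of GrabThem: capture without the shutter sound, then close the window
i=1
while read -r f
do
    echo "Processing $f..."
    open -a Firefox "$f"
    sleep 5                                   # give the page time to load
    screencapture -x "${i}.png"               # -x: no camera-shutter sound
    # Cmd-W closes the frontmost window/tab (assumes Firefox is frontmost)
    osascript -e 'tell application "System Events" to keystroke "w" using command down'
    ((i++))
done < sites.txt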
I've been using a script through cron to check if a web page has changed.
Everything is running smoothly, but this is my first attempt and I know it could be better. I'm a noob, so take it easy.
I've cobbled this together from many sources. The standard services that check whether a webpage has changed didn't work on my webpage of interest, because each visit created a new shopping cart ID. Instead, I look for a line containing something like article:modified_time (my TextOfInterest), grab that whole line, and compare it to the same line in the prior file.
Things to maybe do better:
I'd like to define each file (.txt, .html, websites) at the beginning.
I've also run into some other code that looks like it might not have to save to a file and could run in memory instead?
I'm currently saving to a flash drive (OpenMediaVault); I'd like to change the directory so the files are written to a different drive.
Any other improvements are welcome.
Here is working code:
#!/bin/bash
# keep a copy of the previous download for comparison
cp Current.html Prior.html
wget -O Current.html 'https://websiteofinterest.com/'
# pull out just the line(s) containing the text of interest
awk '/TextOfInterest/' Prior.html > Prior.txt
awk '/TextOfInterest/' Current.html > Current.txt
# if the lines differ, e-mail the diff
diff -q Current.txt Prior.txt || diff Current.txt Prior.txt | mail -s "Website Changed" "Emailtosendto@email.com"
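To address a couple of the points above, here is a rough sketch (not tested against your site; the URL, pattern, directory and e-mail address are just placeholders) that defines everything at the top, writes into a directory you choose, and skips the intermediate .txt files by using bash process substitution:
#!/bin/bash
# everything configurable lives up here
url="https://websiteofinterest.com/"
pattern="TextOfInterest"
workdir="/path/to/other/drive"          # write the files on a different drive
mailto="Emailtosendto@email.com"

cd "$workdir" || exit 1
cp Current.html Prior.html
wget -q -O Current.html "$url"

# compare only the lines of interest, without writing Prior.txt/Current.txt
if ! diff <(grep "$pattern" Prior.html) <(grep "$pattern" Current.html) > /dev/null
then
    diff <(grep "$pattern" Prior.html) <(grep "$pattern" Current.html) | mail -s "Website Changed" "$mailto"
fi
grep does the same line-matching job as the awk pattern here, and the <( ... ) process substitution is a bash feature, so keep the #!/bin/bash shebang.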
I do not have much experience with the command line, and even after doing my research I wasn't able to solve my problem.
I need to download a .txt file from a folder on box.com.
I attempted using:
$ curl -o FILE URL
However, all I got was an empty text file named with random numbers. I assume this happened because the URL of the file's location does not end in .txt, since the file is in a folder on box.com.
I also attempted:
$ wget FILE URL
However, my Mac's Terminal doesn't seem to find that command.
Is there a different command that can download the file from box.com? Or am I missing something?
You need to put your URL in quotes to stop the shell from trying to parse it:
curl -o myfile.txt "http://example.com/"
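If the shared link redirects to the actual file (common with file-sharing sites such as Box), it may also help to add -L so curl follows redirects; the URL below is only a placeholder:
curl -L -o myfile.txt "https://app.box.com/shared/static/your-file-id.txt"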
Update: If the URL requires authentication
Modern browsers allow you to export requests as curl commands.
For example, in Chrome, you can:
open your file's URL in a new tab
open Developer Tools (View -> Developer -> Developer Tools)
switch to the Network tab in the tools
refresh the page; a request should appear in the Network tab
right-click the request and choose "Copy -> Copy as cURL"
paste the command into the shell
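The pasted command typically looks something like the sketch below; every URL, header and cookie value here is a made-up placeholder, yours will come straight from the browser:
curl 'https://app.box.com/some/long/download/url' \
  -H 'User-Agent: Mozilla/5.0 ...' \
  -H 'Cookie: session=PLACEHOLDER' \
  -o myfile.txt
You may need to add the -o yourself so the response is written to a file instead of the terminal.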
Is there any way to find out when a file was first created on a Mac?
The Mac shows the creation time when you select a file in Finder. Editors almost always depend on the OS-provided attributes for this.
There are at least two ways to determine this, and for any file in OSX.
The first option works if you are familiar with Terminal and navigating Unix (Bash etc. on OS X). Use the list-files command, "ls".
Navigate to the folder that contains your PyCharm Python file. Use the ls command to list the contents of that folder (directory) and include the options t, r and U.
For example:
ls -alhtrU
This instructs the ls command to list:
"a" both visible and invisible files in the directory,
"l" in long format (one file per line),
"h" with human-readable file sizes,
"t" sorted by time modified/last accessed/created,
"r" in reverse, so the most recently created file is at the bottom of the list and therefore still visible near your command prompt when the list is long,
"U" using the date the file was created as the time information for ordering and displaying the files.
This method is not perfect. If the file was created last calendar year, only the year is displayed. If the file was created this calendar year, the created date and time to the minute is displayed. If you include an r in the ls command as suggested, the most recently created files appear in the ls list at the bottom (reverse order). This is helpful if there are many files in that folder/directory and your files of interest were created recently compared to the other files in that directory.
There is likely a different Unix command to show the creation date and time of just the particular file you're interested in.
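For a single file, the BSD stat bundled with OS X should be able to print the birth (creation) time directly; the file name below is just an example:
stat -f "Created: %SB" myfile.py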
Learning the options available for basic Unix commands can be very helpful. This and other options for the ls command can be found by entering the following in Terminal:
man ls
This gives you the manual page for the ls command. Press "q" when you're finished reading to return to the Terminal command line. Or open a second Terminal window to load man pages so that you can reference the options in one window while practicing them on the command line in another.
The second option is to open the folder containing the file you're interested in, in the OS X GUI (Finder).
Open the folder, then go to the Finder Menu, under View, select View Options. You can tick the box to show file "Date Created".
This solution saves you the time required to learn more about the ls command and has the benefit of updating in real time as you create new files in that folder, which may be desirable. The downside is that if you're interested in invisible files (those beginning with a ".", as shown by ls in Terminal), these will not be visible without additional OS X tweaks. An alternative is to use Finder's Find on that folder specifically and use the more detailed options available in Find.
I drag a lot of graphic files from Finder directly into InDesign and Photoshop. I use a very simple bash script to quickly open the directory containing the file.
cd "/Volumes/Server/Resources/stock1/"
open .
The script opens the correct directory, but I would like to know how to get it to also go to a specified file (e.g., image.eps) and highlight/select it.
The directories I work with contain hundreds of files and have hard-to-look-through names. This would be a huge time-saver.
Thanks so much for any help. I'm using Mac OSX 10.9.5.
Use the -R (aka --reveal) option to select a single file:
open -R "/Volumes/Server/Resources/stock1/image.eps"
Something like,
open -R "/Volumes/Server/Resources/stock1/"*.eps
will not select all the eps files in the folder, but will instead select each one successively, so that the end result is that only the last file is selected.
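If you want to fold this into a small script like the one in the question, here is a minimal sketch; the path is the one from the question, while the script name and default file name are made up:
#!/bin/bash
# reveal the file named on the command line in Finder,
# falling back to image.eps if no name is given
dir="/Volumes/Server/Resources/stock1"
open -R "${dir}/${1:-image.eps}"
Saved as, say, reveal and made executable with chmod +x reveal, it runs as ./reveal image.eps.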
@chepner's answer (the -R option) is great if you want to highlight just one file. If you want to select multiple files, you may want to use AppleScript like this:
osascript -e 'tell application "Finder" to select files in folder "stock1" of folder "PHOTOS and IMAGES" of disk "Server" whose name ends with ".eps"'
I'd like to download multiple numbered images from a website.
The images are structured like this:
http://website.com/images/foo1bar.jpg
http://website.com/images/foo2bar.jpg
http://website.com/images/foo3bar.jpg
... And I'd like to download all of the images within a specific interval.
Are there simple browser addons that could do this, or should I use "wget" or the like?
Thank you for your time.
Crudely, on Unix-like systems:
#!/bin/bash
# bash handles the {1..3} brace expansion; a plain POSIX sh may not
for i in {1..3}
do
    wget "http://website.com/images/foo${i}bar.jpg"
done
Try googling "bash for loop".
Edit LOL! Indeed, in my haste I omitted the name of the very program that downloads the image files. Also, this goes into a text editor; then you save it with an arbitrary file name, make it executable with the command
chmod u+x the_file_name
and finally you run it with
./the_file_name
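As an aside, curl can expand a numeric range in the URL itself (its "URL globbing"), so for a simple case like this the loop isn't strictly necessary; with -O each image is saved under its remote name:
curl -O "http://website.com/images/foo[1-3]bar.jpg"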