The question pretty much says it all. I basically want to create a driver that is compatible with my current macOS (Catalina). The issue I'm facing is that my printer (with scanner) currently will only scan pages and print them straight back out. I'd like to be able to use the scanner to save an image of a scanned document.
I'm honestly not sure if writing a driver is the best way to do this but the manufacturer (Canon) no longer has drivers for this old scanner. But it works just as well as the day we got it so I REALLY don't want to have to toss this one out and buy a new one.
UPDATE: currently stuck with the following:
rabdelazin@rabdelazim Downloads % device=$(sane-find-scanner | awk '/Canon/{print $NF}')
rabdelazin@rabdelazim Downloads % echo $device
libusb:020:029
rabdelazin@rabdelazim Downloads % scanimage --device Canon:$device -x 210 -y 297 --mode color --resolution 240 --format=tiff --depth 8 > ~/Downloads/scan.tiff
scanimage: open of device Canon:libusb:020:029 failed: Operation not supported
I have an EPSON Perfection 4990 Photo on macOS, so I cannot give you full code and examples for your Canon, but it may get you started. I spend my life in Terminal rather than using GUIs for anything, so I just scan the full area of the platen at full resolution and do whatever I need later with ImageMagick or Photoshop if necessary.
So, to get it going, I installed Homebrew. Then I installed some packages:
brew install libusb
brew install sane-backends
Then I can find my scanner with:
sane-find-scanner
Sample Output
found USB scanner (vendor=0x04b8 [EPSON], product=0x012a [EPSON Scanner]) at libusb:003:002
Now you need the last word on that line, the libusb:003:002 part. With my EPSON, I use:
sane-find-scanner | awk '/EPSON/{print $NF}'
You will need to see what you get, and adapt slightly.
Sample Output
libusb:003:002
So, in order to scan, I capture that in a bash variable called device and do this:
device=$(sane-find-scanner | awk '/EPSON/{print $NF}')
scanimage --device epson:$device -x 210 -y 297 --mode color --resolution 240 --format=tiff --depth 8 > ~/Desktop/scan.tif
I put the whole lot in a bash script called scan like this:
#!/bin/bash
TMP="$HOME/Desktop/scan.tif"
# Find libusb device name
device=$(sane-find-scanner | awk '/EPSON|HP/{print $NF}')
if [ -z "$device" ]; then
    echo "ERROR: Unable to find libusb device"
    exit 1
fi
echo "Found scanner at: $device"
# Now scan full-size, colour, hi res
scanimage --device epson:$device -x 210 -y 297 --mode color --resolution 240 --format=tiff --depth 8 > "$TMP"
# Check we got a file
if [ ! -s "$TMP" ]; then
    echo "ERROR: Empty scan"
    exit 1
fi
My script has some further, optional, ImageMagick stuff at the end to create a Web-usable JPEG; if you add this, you will need to do:
brew install imagemagick
Then add this to the script above:
# Copy the file to User's Desktop and number nicely...
# ... save as hi-res 16-bit TIF
# ... and medium res, medium quality JPG for web use
cd ~/Desktop
i=0
while :; do
    base=$(printf "scan-%03d" $i)
    if [ ! -f "${base}.jpg" ]; then
        cp "$TMP" "${base}.tif"
        convert "$TMP" -resize 2000x2000 -quality 85% "${base}.jpg"
        break
    fi
    ((i++))
done
Here are a couple of resources I found helpful when working it all out. You can debug the scanimage program with:
SANE_DEBUG_SNAPSCAN=128 scanimage -L
This resource was useful.
You can get help like this:
scanimage --help -d epson
Note that you may also be able to use a Raspberry Pi or similar small, low-cost Linux machine as a "scanner server". Basically, you would attach your scanner via USB to the Raspberry Pi and run SANE on the Raspberry Pi. Once you get it working, you could run saned, a daemon service on the Raspberry Pi that listens on the network for other devices (such as your Mac) making requests to scan. It does the scan using its Linux SANE drivers and delivers the image back over the network to the Mac (or other) client. I know you dislike this option, but there may be future readers...
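If anyone wants to try that, here is a minimal sketch of the setup. I'm assuming a Debian-based OS on the Pi (so the package is sane-utils and saned is started via a systemd socket); the subnet and hostname are placeholders, and under Homebrew on the Mac the sane.d config directory may live under /usr/local/etc rather than /etc:
# On the Raspberry Pi: install SANE, allow the LAN to use saned, start the daemon
sudo apt install sane-utils
echo "192.168.1.0/24" | sudo tee -a /etc/sane.d/saned.conf
sudo systemctl enable --now saned.socket
# On the Mac: point the SANE "net" backend at the Pi, then scan over the network
echo "raspberrypi.local" >> /etc/sane.d/net.conf
scanimage -L    # the Pi's scanner should now show up with a net: prefix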
Keywords: macOS, OSX, scan, scanner, scanning, EPSON, Canon, HP, libusb, SANE, sane-backends
Well, after a LOT of trial and error, I've finally come up with a solution.
TL;DR: I made a print server out of a Raspberry Pi, installed CUPS, and set the printer to be shared through the server. Works like a charm!
It took quite a bit of investigation, but as part of reviving an old laptop, I got it running by installing Ubuntu 20.04. Just for kicks, I decided to try printing something from the laptop. I had to install CUPS and maybe a few other packages, but it worked. That got me thinking that I should just make a print server that knows how to talk to the printer, so all the other machines can come and go but my printer will still work.
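For future readers, the Raspberry Pi side boils down to a few commands. This is a sketch assuming a Debian-based OS on the Pi, where the CUPS admin group is lpadmin and the default user is pi:
# Install CUPS and let the default user administer printers
sudo apt install cups
sudo usermod -aG lpadmin pi
# Share locally attached printers and accept jobs/admin from the network
sudo cupsctl --share-printers --remote-any --remote-admin
sudo systemctl restart cups
The Mac should then discover the shared queue over Bonjour, or you can add it manually by the Pi's IP address.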
I have to uninstall some software, and while uninstalling, it prompts for a yes or no answer, as in the sample below.
# /opt/altiris/notification/nsagent/bin/aex-uninstall
This will remove the Symantec Management Agent for UNIX, Linux and Mac software from your system.
Are you sure you want to continue [Yy/Nn]?
Now, as I have multiple Linux systems to do this on, I'm looking at Ansible to do the job for me. I have just tested it the ad-hoc way as follows, and it works with the shell module...
# ansible all -m shell -a 'echo "y" | /opt/altiris/notification/nsagent/bin/aex-uninstall'
dev-karn | SUCCESS | rc=0 >>
This will remove the Symantec Management Agent for UNIX, Linux and Mac software from your system.
Are you sure you want to continue [Yy/Nn]?
Uninstalling dependant solutions...
Uninstalling dependant solutions finished.
Removing Symantec Management Agent for UNIX, Linux and Mac package from the system...
Removing wrapper scripts and links for applications...
Sending uninstall events to NS
Stopping Symantec Management Agent for UNIX, Linux and Mac: [ OK ]
Remove non packaged files.
Symantec Management Agent for UNIX, Linux and Mac Configuration utility.
Removing aex-* links in /usr/bin
Removing RC init links and scripts
Cleaning up after final package removal.
Removal finished.
Is there a better way to do it?
Any ideas will be much appreciated.
You should use the Ansible expect module.
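For example, as an ad-hoc command it might look like this. This is a sketch: the expect module needs the Python pexpect library on the target hosts, and the prompt regex is taken from the output shown in the question:
ansible all -m expect -a '{"command": "/opt/altiris/notification/nsagent/bin/aex-uninstall", "responses": {"continue \\[Yy/Nn\\]": "y"}}'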
The yes utility of the Linux shell repeatedly outputs y, which satisfies every answer-seeking prompt a program puts up. Since the program I'm running only needs the answer yes to proceed, I used it as below with the Ansible shell module...
$ ansible all -m shell -a '/bin/yes | /opt/altiris/notification/nsagent/bin/aex-uninstall'
So even echo "y" is not needed.
Thanks - Karn
After upgrading to macOS Sierra (10.12), my sudo command seems to be acting differently. See the following test case:
# Run in terminal pane #1: (should prompt for password)
sudo -v
# Run in terminal pane #2: (should NOT prompt for password)
sudo -v
The above works as expected on earlier versions of OS X. However, on Sierra, the second command prompts for the password again. It does not prompt for the password within the same terminal pane. This seems to only happen for the root user; the following works as expected on all OS versions including Sierra:
# Run in terminal pane #1: (prompts for password)
sudo -v -u "$USER"
# Run in terminal pane #2: (does not prompt for password)
sudo -v -u "$USER"
Looking at /etc/sudoers, the timestamp_timeout value is not set to 0. I've briefly looked over the changelog for 1.7 to 1.8 but could not come up with anything significant, other than a mention of a policy plugin for Sierra when running sudo -V.
Can anybody help me figure out what has changed? I have a script that relies on the sudo timeout value for a keepalive and on Sierra it is prompting for the password constantly since it seems to no longer use a timestamp for the root user.
After a ton of searching and comparing the sudo configuration on older OS versions to Sierra's (sudo su; sudo -V), it seems that Sierra enables tty_tickets by default now, causing the issues mentioned above. As far as I can tell, this was an undocumented change. To fix, the following needs to be added to the /etc/sudoers file via running sudo visudo,
Defaults !tty_tickets
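After saving that with sudo visudo, the original two-pane test case should behave as it did before Sierra:
# In terminal pane #1: (prompts for password)
sudo -v
# In terminal pane #2: (should no longer prompt, since tickets are shared across ttys again)
sudo -v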
TL;DR: BAD IDEA. This old behavior, while an option to sudo, is used as a default by NO OTHER UNIX-y OS that I have ever encountered, the reason being that it's trivial to exploit, and when exploited, the malignant code doing so will have full control of your system.
Original very long rant-y post, correctly pointed out to be blahdiblah:
LOL, this is funny. I came here from googling because I couldn't remember how I would change the old behavior to this new, correct one (used by every other UNIX-y OS out there). Hadn't even noticed my new Sierra Mac now behaved properly.
I wrote on the Mac forums earlier about this previous behavior, which is a gaping security hole. I even supplied a three-line proof-of-concept script that would simply sit around (as a regular user) waiting for a sudo event to appear anywhere, then instantly gain root access to the system. I was booed out of the thread by the fanboys, then got banned from it for calling out lies. Seems Apple was listening, though. Good job, for once, Cupertino. Bad, BAD idea to try to get the old behavior back.
For reference, here's the three-liner. It doesn't do anything malignant; it just adds a dummy file to the root of the filesystem once it gains sudo. Run it in a script (or just paste it into a shell which doesn't already have sudo), then either do a sudo in another terminal app/window or an app which uses sudo (e.g. TrueCrypt/VeraCrypt or similar), then watch it work.
# Wait (as a regular user) until the next sudo event by this user shows up in the system log
tail -f -n 0 /var/log/system.log | grep -m 1 -E 'sudo\[[0-9]+\]:\s+'$USER
echo "Gonna play around with root privs ..."
# Without tty_tickets, the fresh sudo timestamp applies here too - no password prompt
sudo touch /kilroy-was-here
I have a bash script that downloads some files from an FTP server. The problem is that sometimes curl randomly returns error 6 (can't resolve host)! I can open the FTP site via a web browser without any problem. I also noticed that most of the errors occur on the first downloads. Any ideas?
I also wanted to know how I can make curl retry the download when these errors occur.
Code I used:
curl -m 60 --retry 10 --retry-delay 10 --ftp-method multicwd -C - ftp://some_address/some_file --output ./some_file
Note: I also tried the command without --ftp-method multicwd.
OS: CentOS 6.5 64bit
while [ "$ret" != "0" ]; do curl [your options]; ret=$?; sleep 5; done
Assuming those are transient problems with the server and/or DNS, looping might be of some help. This is a particularly good case for the rarely used (?) until loop:
until curl [your options]; do sleep 5; done
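If you would rather not loop forever, here's a bounded variant of the same idea, built from the curl options in the question (the 10 attempts and 5-second pause are arbitrary choices):
attempts=0
until curl -m 60 --ftp-method multicwd -C - ftp://some_address/some_file --output ./some_file; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 10 ]; then
        echo "Giving up after $attempts attempts" >&2
        exit 1
    fi
    sleep 5
done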
In addition, if using curl is not mandatory, wget might be better suited for "unreliable" network connections. From the man page:
GNU Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.
[...]
Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.
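So a rough wget equivalent of the curl command in the question might be something like this (a sketch: -c resumes a partial file, --tries=0 retries indefinitely, --waitretry waits up to 10 seconds between retries):
wget -c --tries=0 --waitretry=10 -O ./some_file ftp://some_address/some_file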
I've searched around a bit for similar questions, but they mostly cover running one command, or perhaps a few, with something such as:
ssh user@host -t sudo su -
However, what if I essentially need to run a script on (let's say) 15 servers at once? Is this doable in bash? In a perfect world I need to avoid installing applications if at all possible to pull this off. For argument's sake, let's just say that I need to do the following across 10 hosts:
Deploy a new Tomcat container
Deploy an application in the container, and configure it
Configure an Apache vhost
Reload Apache
I have a script that does all of that, but it relies on me logging into all the servers, pulling a script down from a repo, and then running it. If this isn't doable in bash, what alternatives do you suggest? Do I need a bigger hammer, such as Perl (Python might be preferred since I can guarantee Python is on all boxes in a RHEL environment thanks to yum/up2date)? If anyone can point to me to any useful information it'd be greatly appreciated, especially if it's doable in bash. I'll settle for Perl or Python, but I just don't know those as well (working on that). Thanks!
You can run a local script as shown by che and Yang, and/or you can use a Here document:
ssh root@server /bin/sh <<\EOF
wget http://server/warfile # Could use NFS here
cp app.war /location
command 1
command 2
/etc/init.d/httpd restart
EOF
Often, I'll just use the original Tcl version of Expect. You only need to have that on the local machine. If I'm inside a program using Perl, I do this with Net::SSH::Expect. Other languages have similar "expect" tools.
The issue of how to run commands on many servers at once came up on a Perl mailing list the other day and I'll give the same recommendation I gave there, which is to use gsh:
http://outflux.net/unix/software/gsh
gsh is similar to the "for box in box1_name box2_name box3_name" solution already given but I find gsh to be more convenient. You set up a /etc/ghosts file containing your servers in groups such as web, db, RHEL4, x86_64, or whatever (man ghosts) then you use that group when you call gsh.
[pdurbin@beamish ~]$ gsh web "cat /etc/redhat-release; uname -r"
www-2.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-2.foo.com: 2.6.9-78.0.1.ELsmp
www-3.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-3.foo.com: 2.6.9-78.0.1.ELsmp
www-4.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-4.foo.com: 2.6.18-92.1.13.el5
www-5.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-5.foo.com: 2.6.18-92.1.13.el5
[pdurbin@beamish ~]$
You can also combine or split ghost groups, using web+db or web-RHEL4, for example.
I'll also mention that while I have never used shmux, its website contains a list of software (including gsh) that lets you run commands on many servers at once. Capistrano has already been mentioned and (from what I understand) could be on that list as well.
Take a look at Expect (man expect)
I've accomplished similar tasks in the past using Expect.
You can pipe the local script to the remote server and execute it with one command:
ssh -t user@host 'sh' < path_to_script
This can be further automated by using public key authentication and wrapping with scripts to perform parallel execution.
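For instance, a minimal sketch of the parallel wrapper, assuming key-based authentication is already set up (the hostnames are placeholders):
# Run the same local script on several hosts at once, then wait for all of them
for host in host1 host2 host3; do
    ssh user@"$host" 'sh' < path_to_script &
done
wait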
You can try paramiko. It's a pure-Python SSH client, and you can program your SSH sessions. Nothing to install on the remote machines.
See this great article on how to use it.
To give you the structure, without actual code.
Use scp to copy your install/setup script to the target box.
Use ssh to invoke your script on the remote box.
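In shell terms, the two steps are just (a sketch; the script name, user, and host are placeholders):
# Step 1: copy the install/setup script to the target box
scp ./setup.sh user@target:/tmp/setup.sh
# Step 2: invoke it on the remote box
ssh user@target 'sh /tmp/setup.sh'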
pssh may be interesting since, unlike most solutions mentioned here, the commands are run in parallel.
(For my own use, I wrote a smaller, simpler script very similar to GavinCattell's; it is documented here - in French).
Have you looked at things like Puppet or Cfengine? They can do what you want and probably much more.
For those that stumble across this question, I'll include an answer that uses Fabric, which solves exactly the problem described above: Running arbitrary commands on multiple hosts over ssh.
Once fabric is installed, you'd create a fabfile.py, and implement tasks that can be run on your remote hosts. For example, a task to Reload Apache might look like this:
from fabric.api import env, run
env.hosts = ['host1@example.com', 'host2@example.com']
def reload():
""" Reload Apache """
run("sudo /etc/init.d/apache2 reload")
Then, on your local machine, run fab reload and the sudo /etc/init.d/apache2 reload command would get run on all the hosts specified in env.hosts.
You can do it the same way you did before, just scripted instead of done manually. The following code logs in to a machine named 'loca' and runs two commands there. What you need to do is simply insert the commands you want to run:
che@ovecka ~ $ ssh loca 'uname -a; echo something_else'
Linux loca 2.6.25.9 #1 (blahblahblah)
something_else
Then, to iterate through all the machines, do something like:
for box in box1_name box2_name box3_name
do
ssh $box 'commands_to_run_everywhere'
done
In order to make this ssh thing work without entering passwords all the time, you'll need to set up key authentication. You can read about it at IBM developerworks.
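The one-time key setup usually looks like this; ssh-copy-id ships with OpenSSH on most Linux distributions (the box names are the placeholders from the loop above):
# Generate a key pair once (accept the defaults), then push the public key to every box
ssh-keygen
for box in box1_name box2_name box3_name; do
    ssh-copy-id "$box"
done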
You can run the same command on several servers at once with a tool like cluster ssh. The link is to a discussion of cluster ssh on the Debian package of the day blog.
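Once installed, usage is a one-liner; cssh opens one small terminal per host plus a console window whose keystrokes are broadcast to all of them (the hostnames below are placeholders):
cssh host1 host2 host3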
Well, for steps 1 and 2, isn't there a Tomcat manager web interface? You could script that with curl, or with zsh and the libwww plugin.
For SSH you're looking to:
1) not get prompted for a password (use keys)
2) pass the command(s) on SSH's commandline, this is similar to rsh in a trusted network.
Other posts have shown you what to do, and I'd probably use sh too, but I'd be tempted to use Perl, like ssh tomcatuser@server perl -e 'do-everything-on-one-line;'. Or you could do this:
either scp the_package.tbz tomcatuser@server:the_place/. first, or fetch it inside the session:
ssh tomcatuser@server /bin/sh <<\EOF
# Define locations to suit your layout
TOMCAT_WEBAPPS=/usr/local/share/tomcat/webapps
THE_PLACE=/the_place
APACHE_VHOST_DIR=/etc/apache2/vhosts.d
APACHECTL=/usr/sbin/apachectl
tar xjf the_package.tbz               # or: rsync rsync://repository/the_package_place
mv $TOMCAT_WEBAPPS/old_war $TOMCAT_WEBAPPS/old_war.old
mv $THE_PLACE/new_war $TOMCAT_WEBAPPS/new_war
touch $TOMCAT_WEBAPPS/new_war         # you don't normally have to restart Tomcat
mv $THE_PLACE/vhost_file $APACHE_VHOST_DIR/vhost_file
$APACHECTL restart                    # might need to log in as the apache user to move that file and restart
EOF
You want DSH or distributed shell, which is used in clusters a lot. Here is the link: dsh
You basically have node groups (a file with lists of nodes in them), and you specify which node group you wish to run commands on; then you use dsh, much as you would use ssh, to run commands on them.
dsh -a /path/to/some/command/or/script
It will run the command on all the machines at the same time and return the output prefixed with the hostname. The command or script has to be present on the system, so a shared NFS directory can be useful for these sorts of things.
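For example, to run a command on just one node group concurrently, with each output line prefixed by its hostname (a sketch; "web" stands for a group file you have defined):
dsh -g web -M -c -- uptime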
This creates an ssh wrapper command, named after the host, for every machine you have accessed.
by Quierati
http://pastebin.com/pddEQWq2
# Use in .bashrc
# Requires "HashKnownHosts no" in ~/.ssh/config or /etc/ssh/ssh_config;
# if known_hosts is already hashed, delete known_hosts first
[ ! -d ~/bin ] && mkdir ~/bin
for host in $(cut -d, -f1 ~/.ssh/known_hosts | cut -f1 -d " "); do
    # Create a wrapper that runs its arguments on that host
    [ ! -s ~/bin/$host ] && echo ssh $host '$*' > ~/bin/$host
done
[ -d ~/bin ] && chmod -R 700 ~/bin
export PATH=$PATH:~/bin
Example execution:
$ for i in hostname{1..10}; do $i who; done
There is a tool called FLATT (FLexible Automation and Troubleshooting Tool) that allows you to execute scripts on multiple Unix/Linux hosts with a click of a button. It is a desktop GUI app that runs on Mac and Windows but there is also a command line java client.
You can create batch jobs and reuse on multiple hosts.
Requires Java 1.6 or higher.
Although it's a complex topic, I can highly recommend Capistrano.
I'm not sure if this method will work for everything that you want, but you can try something like this:
$ cat your_script.sh | ssh your_host bash
Which will run the script (which resides locally) on the remote server.
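If the script takes arguments, the bash -s idiom lets you pass them while still reading the script itself from stdin:
ssh your_host bash -s -- arg1 arg2 < your_script.sh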
I just read a blog post about using setsid, which needs no installation or configuration beyond a mainstream kernel. Tested and verified under Ubuntu 14.04.
While the author has a very clear explanation and sample code as well, here's the magic part for a quick glance:
#----------------------------------------------------------------------
# Create a temp script to echo the SSH password, used by SSH_ASKPASS
#----------------------------------------------------------------------
SSH_ASKPASS_SCRIPT=/tmp/ssh-askpass-script
cat > ${SSH_ASKPASS_SCRIPT} <<EOL
#!/bin/bash
echo "${PASS}"
EOL
chmod u+x ${SSH_ASKPASS_SCRIPT}
# Tell SSH to read in the output of the provided script as the password.
# We still have to use setsid to eliminate access to a terminal and thus avoid
# it ignoring this and asking for a password.
export SSH_ASKPASS=${SSH_ASKPASS_SCRIPT}
......
......
# Log in to the remote server and run the above command.
# The use of setsid is a part of the machinations to stop ssh
# prompting for a password.
setsid ssh ${SSH_OPTIONS} ${USER}@${SERVER} "ls -rlt"
The easiest way I found, without installing or configuring much software, is plain old tmux. Say you have nine Linux servers. Pick one box as your main, and start a tmux session there:
tmux
Then create nine tmux panes by splitting eight times:
ctrl-b + %
Now SSH into a box in each pane. You'll need to know a few tmux shortcuts. To navigate between panes, press:
ctrl+b <arrow-keys>
Once you're logged in to all your boxes, one per pane, turn on pane synchronization, which lets you type the same thing into every pane:
ctrl+b :setw synchronize-panes on
Now whatever you type shows up in every pane. To turn it off, just change on to off in the command above. To cycle pane layouts, press ctrl+b <space-bar>.
This works a lot better for me, since I need to see each terminal's output; sometimes servers crash or hang for whatever reason when downloading or upgrading software. If there are any issues, you can just isolate and resolve them individually.