Cron job unable to start - shell

I have a cron job that runs every minute:
* * * * * /home/username/start.sh
start.sh contents:
touch /home/username/test.txt
firefox
The cron job updates the timestamp of test.txt but fails to open Firefox. I have tried setting environment variables, but that doesn't seem to work.

You need to use the absolute path to Firefox, which you can find by running which firefox as a normal user; the value could, for example, be /usr/bin/firefox.
Also, Firefox is a GUI program, but cron jobs run in the background and know nothing about X, so you can't start a new Firefox client this way. For example, suppose you have two desktop logins to the same machine, one over the network and one local: where should Firefox show up when run from cron? It can't appear in both places without a more complex setup such as rdesktop. You can run it on a given display if you add that to the environment, like this (untested):
export DISPLAY=:0.0
/usr/bin/firefox
But Firefox will protest on every run after the first that it's already running, so this will only work if you close the window every minute.
What are you trying to accomplish?
If you are trying to check that a web server is running, you can use a shell tool like wget or curl (one of which is very likely installed already) to prod it. If you want to do testing (and you do, if you want to develop good software), have a look at frameworks like Selenium and nodeunit.
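For example, a minimal sketch of a cron-friendly availability check (the URL and log path are hypothetical):
#!/bin/sh
# exit quietly on success; log a line if the server doesn't answer within 10 seconds
if ! curl --silent --fail --max-time 10 http://localhost:8080/ >/dev/null; then
    echo "$(date): server check failed" >> /home/username/server-check.log
fi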

Related

How can I schedule an automated workflow on a Mac

I have tried to change the IP of a server via a script and schedule it to run automatically, but it wasn't giving proper results, so I relied on a hacky method using the Automator app on the Mac.
I created a workflow using Automator. Now I need to schedule it to run every 10 minutes or so. How can I do that on a Mac, with scripts or anything else? Suggestions will be of great help.
Below is the workflow I created.
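macOS ships an automator command-line tool, so one possible sketch is a crontab entry that runs the saved workflow every 10 minutes (the workflow path is hypothetical):
*/10 * * * * /usr/bin/automator /Users/me/Workflows/ChangeIP.workflow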

Debug Firefox on CI server

I build my project on Travis CI. I run tests the following way:
Run Firefox with an index.html parameter, which loads a script that repeatedly attempts to connect to a websocket server.
Run a simple websocket server that sends commands to Firefox.
The script in Firefox reads these commands (they may contain some JavaScript code to test) and executes the JavaScript code.
This works when I run things locally, and it also used to work on Travis until a couple of weeks ago. However, a setup that had worked for years broke unexpectedly: Firefox reports no errors, but the node.js server receives no incoming connections before some timeout. I don't know how to debug the problem. The script that runs in Firefox uses console.log extensively, but I can't retrieve these logs from Travis. Is there any way to get some information from a Firefox that runs on a CI server?
Note: I run Firefox 53. After things broke, I tried upgrading to a recent version. I also used to run the following commands before running Firefox:
export DISPLAY=:99.0
sh -e /etc/init.d/xvfb start
sleep 10
I also tried removing these lines and using headless mode, but this didn't work.
"The script that runs in Firefox uses console.log extensively, but I can't retrieve these logs from Travis. Is there any way to get some information from a Firefox that runs on a CI server?"
Firefox 65+ supports a new devtools.console.stdout.content preference in about:config, which you can set to true to have Console output dumped to stdout (and appear in the Travis log, I believe).
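If the CI run uses a dedicated profile, one way to set that preference is a user.js file; a sketch, with a hypothetical profile directory:
mkdir -p ci-profile
echo 'user_pref("devtools.console.stdout.content", true);' >> ci-profile/user.js
firefox --profile ci-profile index.html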
There doesn't seem to be a good solution for earlier versions: Selenium's driver.get_log() doesn't work in Firefox, and other solutions look unsatisfactory to me.
You seem to run index.html as a file:/// URL; in my opinion that's asking for trouble. I recommend spending the time to set up a local https:// server, to save time debugging the ever-increasing "security" restrictions browsers add for non-https content.
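A sketch of one way to do that with a self-signed certificate (the http-server npm package and the port are assumptions, not part of the original setup):
openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj "/CN=localhost" -keyout key.pem -out cert.pem
npx http-server -S -C cert.pem -K key.pem -p 8443 .
# then load https://localhost:8443/index.html instead of the file:/// URL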
If the above doesn't help, try reproducing this with a minimal testcase in a separate repo; if the problem persists, you can share that repo in another question.
You can run VNC with Xvfb and connect to it with vncviewer. There are some more details here: https://www.alexkras.com/debugging-xvfb-server-with-vnc/
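A sketch of the basic idea (the display number, screen geometry, and tools are assumptions):
Xvfb :99 -screen 0 1280x1024x24 &                 # virtual framebuffer on display :99
DISPLAY=:99 firefox index.html &                  # run the browser against it
x11vnc -display :99 -bg -nopw -listen localhost   # expose :99 over VNC
vncviewer localhost:0                             # connect, e.g. through an SSH tunnel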

Selenium with Windows Release Management

In Microsoft Release Management 2013, we have configured a Powershell Executor step to trigger a Windows batch file. This batch file is executed on a different server, which also happens to be our App Server. As an experiment, we have a simple test case that opens IE, accesses the Google homepage, and closes the browser; a screenshot is also captured. When we run the job from RM, the Selenium logs say that the browser launched and the test succeeded, but the screenshot is just a black screen. All of this runs as the Windows service account user, which is the same user running RM. This user has no log-on privilege.
If I log into the App Server with my own ID and execute the batch file manually, the screenshot is captured correctly.
I have read several online posts about the black screen. People say the screenshot is black because the screen is locked. Does this mean the RM Powershell Executor step must run with credentials other than the service account? If so, how do we do that? Some suggestions mentioned installing VNC. Is that relevant in this situation?
I'm fairly new to Windows. I've mostly been working in Linux systems and I've been requested to debug this issue here. Any pointers/guidance will be appreciated. Thank you!
The deployment agent does not run as an interactive service. You're going to have a lot of trouble getting it to directly invoke Selenium tests. I wrote a blog post a few years ago showing how I accomplished it. Basically, you use Selenium hub to execute the tests interactively from agent machines.
We finally got this to work.
We were invoking the TestNG Selenium tests from a batch script, which was specified in RM within the Powershell Executor task. The main point to note is that, in the Powershell Executor task, we must first run a cd (change directory) command to change into the directory containing the Selenium scripts, and then specify the complete path to the batch script in the same task. This cd command is very important: without it, the batch script would execute but the Selenium step inside it wouldn't work, and you would just get a vague "configuration errors:1" in the final output.
We took care of website authentication using AutoIt for the IE browser.
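A sketch of what the Powershell Executor step ends up looking like (the paths are hypothetical):
cd C:\Tests\Selenium
C:\Tests\Selenium\run-tests.bat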

Run an AppleScript from a server/in the cloud

Is there a way to run AppleScripts from a server or from a cloud service?
I want to have some scripts that can run if my computer is sleeping/off.
I looked around a bit on Google, but haven't found anything promising.
If this doesn't exist I basically need to remove the password from my computer and wake up the computer whenever the script needs to run.
It largely depends on what you want to do with the script. There are a few options.
You can use 'stay open' script bundles that, for example, check a certain folder and run when you interact with that folder.
You can launch certain scripts when the server boots.
You need to have a server that is always on for this to work. I have this running myself and it works just fine. However, as I said before, it largely depends on what you want to do with your scripts.
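For example, if the always-on machine is a Mac, cron can run an AppleScript via osascript; a sketch with a hypothetical script path:
# crontab entry: run the script every hour on the always-on server
0 * * * * /usr/bin/osascript /Users/me/Scripts/nightly.scpt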

Script to automatically sync on directory/file modification in between Mac OSX machine and Linux machine

I have some source code on my Mac, and in order to test I'm interested in synchronizing it with a VM containing a similar web server setup to the production environment. Therefore I need to be able to automatically copy files over to the VM every time there are changes.
I know I can use rsync to do this manually whenever a script is run, but I need some way of getting it to run in the background every single time a file in a particular directory or one of its sub-directories is modified.
I know inotifywait exists on Linux machines and could solve this problem. I've also read about the FSEvents API and kqueue. However, none of these seem to be accessible from the command line, and I really don't want to spend a long time building something for this...
I guess I could use a cron job, but a minute is a pretty long time to wait to see changes on a website...
Any ideas?
I do this all the time, developing on a Windows/Linux/Mac workstation and saving changes to a remote Linux server, where they're immediately served back to my workstation's browser for testing. You've got a couple of options:
You could mount the remote files locally (e.g. via sshfs; see the sketch after this list) and make changes directly to them. That is, your Mac thinks the files are local, so you can edit them with your GUI editor, but when you File->Save, it actually saves the file remotely. The main downside is that you can't work while disconnected from the server.
Mount the local files remotely. This would allow you to work locally while disconnected but won't allow the test site to work when disconnected -- which may not be a big deal. This option might not be doable if you don't have the right tools/access on the remote server.
(My preference.) Use NetBeans IDE, which has a very nice "copy to remote" feature. You maintain a full copy of all files locally, and edit them directly. When you hit File->Save on a file, NetBeans will save it locally and transparently scp/ftp it to your remote server.
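A sketch of the first option, with a hypothetical host and paths:
mkdir -p ~/mnt/vm-www
sshfs user@devvm:/var/www ~/mnt/vm-www   # now edit files under ~/mnt/vm-www as if local
umount ~/mnt/vm-www                      # when done (on Linux: fusermount -u ~/mnt/vm-www)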
How about using a DVCS like git or mercurial, and having the local repo run a post-commit hook that does the rsync and then runs the test itself?
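A sketch of such a hook, saved as .git/hooks/post-commit and made executable (the host, paths, and test script are hypothetical):
#!/bin/sh
# git runs this from the top of the working tree after every commit
rsync -az --delete --exclude .git ./ user@devvm:/var/www/site/
ssh user@devvm /var/www/site/run-tests.sh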
I'm a bit confused about why you can't just run rsync from the same script that runs the test. If you use rsync -e ssh, you can set up automatic public-key authentication between the VM and the Mac, so there won't be anything manual about the rsync in that case.
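A sketch (the host and paths are hypothetical; the trailing slash on the source copies the directory's contents rather than the directory itself):
rsync -avz -e ssh ~/src/mysite/ user@devvm:/var/www/mysite/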
You might be able to set up a launchd agent to do what you want for a simple setup. See this question and the man page for launchd.plist for more information about the launchd WatchPaths key. But it looks like WatchPaths may not fire for changes within sub-directories.
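A sketch of such an agent, saved as ~/Library/LaunchAgents/com.example.sync.plist and loaded with launchctl load (the label, script, and watched path are hypothetical):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.sync</string>
  <key>ProgramArguments</key>
  <array><string>/Users/me/bin/sync.sh</string></array>
  <key>WatchPaths</key>
  <array><string>/Users/me/src/mysite</string></array>
</dict>
</plist>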
