I needed pip before installing the AWS CLI, so I tried to install pip with the following command:
curl -O https://bootstrap.pypa.io/get-pip.py
As soon as I run this command, a transfer timer starts; after roughly 2 minutes and 6 seconds it stops and terminates with an error message (see the attached screenshot).
What does this error indicate?
This error indicates that you either don't have access, or have only limited access, to the address https://bootstrap.pypa.io.
If this is an EC2 instance, make sure your security group allows outbound traffic on port 443 (HTTPS).
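A quick way to confirm this from the instance itself (a minimal check, assuming curl is installed):

# Verbose connection attempt; a hang followed by a timeout here points to
# blocked egress (security group or network ACL) rather than a pip problem.
curl -v https://bootstrap.pypa.io/get-pip.py -o /dev/null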
To install a JupyterLab extension on an AWS SageMaker notebook instance, you need to follow https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/tree/master/scripts and then create the lifecycle configuration accordingly.
I did that, and this is my on-start.sh file.
#!/bin/bash
set -e
# OVERVIEW
# This script installs a jupyterlab extension package in SageMaker Notebook Instance
sudo -u ec2-user -i <<'EOF'
# PARAMETERS
EXTENSION_NAME=@jupyter-widgets/jupyterlab-manager
source /home/ec2-user/anaconda3/bin/activate JupyterSystemEnv
jupyter labextension install $EXTENSION_NAME
source /home/ec2-user/anaconda3/bin/deactivate
EOF
Everything should go smoothly, but for this extension it raises an error.
This is the error log from CloudWatch.
/bin/bash: /tmp/OnStart_2019-06-26-23-3260vo0j6p: /bin/bash^M: bad interpreter: No such file or directory
This is the error message shown in the SageMaker console.
Failure reason
Notebook Instance Lifecycle Config 'arn:aws:sagemaker:ap-southeast-1:658055165324:notebook-instance-lifecycle-config/jupyter-widgets-for-jupyterlab-copy' for Notebook Instance 'arn:aws:sagemaker:ap-southeast-1:658055165324:notebook-instance/test' took longer than 5 minutes. Please check your CloudWatch logs for more details if your Notebook Instance has Internet access.
I made several attempts to locate the 'bad interpreter' bug in the script file and in the ipywidgets setup file, but I cannot find any trace of an error in either.
I tried upgrading my instance to the largest T2 type, in case the error came from a timeout.
The weirdest thing is that I am able to install the extension from the terminal inside JupyterLab. I measured the total installation time at around 4 minutes, just within the limit (AWS should allow more time, since this is only a single extension install). Note that this installation was performed on a t2.medium instance (the cheapest instance type you can get). If you install it this way, you have to restart JupyterLab to make it work; but once you reboot the instance, everything reverts to the not-yet-installed state. This suggests there is no way to install a JupyterLab extension other than through the lifecycle configuration, which leads back to the error.
At this point, I gave up and use the classic Jupyter Notebook instead whenever I really need ipywidgets.
Normally this would be raised with AWS technical support, but I am on the basic plan, so I decided to post it on Stack Overflow for others who might encounter the same thing.
Copy the script into Notepad++.
View > Show Symbol > Show All Characters
Replace "\r" with nothing.
CRLF should become LF, which is valid on Unix.
Copy and paste it back as plain text!
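If you prefer, the same conversion can be done from a shell (a sketch, assuming sed or the dos2unix utility is available):

# Strip the trailing carriage returns in place:
sed -i 's/\r$//' on-start.sh
# Or, equivalently, if dos2unix is installed:
dos2unix on-start.sh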
I just installed the AWS CLI (via pip install awscli, per Amazon's installation instructions) on macOS 10.12.5. The installation completed without issue. But when I run the app (e.g. $ aws help) it just hangs for about a minute and finally fails with:
$ aws help
ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out
I've tried running it from both Python 3.6.1 and 2.7.13 environments. The macOS firewall is disabled so I'm not blocking any outbound requests.
I'm not sure what else to try at this point.
That is very strange behaviour. The AWS CLI does not invoke ssh.
I would suggest that you have another script called aws somewhere in your PATH that you previously used to connect to an Amazon EC2 instance, and that script is running rather than the AWS CLI.
Run this command to discover which one it is running:
$ which aws
/usr/local/bin/aws
In this case, mine is /usr/local/bin/aws, but yours is likely to be elsewhere.
To run the correct one, use:
$ /usr/local/bin/aws help
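If you want to see every aws in your PATH, in lookup order (type -a is a bash builtin, so no extra tooling is needed):

$ type -a aws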
I'm trying to run a Jupyter Notebook server in a Docker container on Windows 10. As shown in the screen grab, the notebook appears to be running on localhost:8888, but my browsers (Chrome and Edge) return a 'connection refused' error. I've temporarily disabled my firewall, but that didn't help. Also, netstat does not list the port as being in use. Any idea what's going on?
Try the following:
docker run -p 8888:8888 -it simonwalkersamuel/bloch_tf:latest
-p 8888:8888 will map container port 8888 to host port 8888.
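You can verify the mapping took effect with docker ps; the PORTS column should show something like 0.0.0.0:8888->8888/tcp:

$ docker ps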
TL;DR: make sure you mapped the ports using -p 8888:8888. If that didn't work, try 192.168.99.100:8888 instead of localhost:8888.
Situation:
I had a slightly different problem: although I mapped the ports using -p 8888:8888, I still got the connection error when trying to reach localhost:8888 in every browser. The firewall was checked and seems OK. It was very confusing, because exactly the same Docker image works on my other Windows 10 laptop at work.
Solution:
I have two slightly different Windows 10 editions on my laptops. The one with the connection difficulty runs Windows 10 Home, whereas the other has Windows 10 Professional. This means the problematic laptop can only run Docker Toolbox, not the conventional Docker CE. Therefore it communicates with the OS via the IP 192.168.99.100, not the usual 127.0.0.1 (localhost). So instead of localhost:8888, just use 192.168.99.100:8888 and it works.
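If you want to confirm the VM's address rather than assume it, Docker Toolbox ships docker-machine, which can report it (this assumes your machine uses the Toolbox default name, default):

$ docker-machine ip default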
Confession!
I usually use my work laptop for running Jupyter on Docker, so I did not pay enough attention to the welcome message of the Docker Quickstart Terminal, which clearly says docker is configured to use the default machine with IP 192.168.99.100. Hopefully this post helps other too-busy (aka careless!) people like me!
Since both laptops have very similar apps installed, I doubt anything other than the Docker app itself causes the difference in IP addresses.
Try running these two commands:
pip install --upgrade pip
pip install --upgrade jupyter
I'm working on a bash setup script for CentOS 6.4 machines. On a brand new install I'm running into an issue that seems to be reproducible, but the scenario is unusual.
The setup script is run with root. The first step is to run yum update with no options:
yum update
This completes successfully with a zero exit code. The next step is to retrieve the EPEL rpm using wget:
wget http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
However, this consistently fails to resolve the host name every time it is run from a clean CentOS install:
wget: unable to resolve host address “dl.fedoraproject.org”
When executing these commands in succession from the command line however, no issues are encountered and wget is able to retrieve the EPEL rpm:
sudo yum update
sudo wget http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
Is there anything that happens during yum update that could cause the DNS lookup to fail, without the script exiting first? If I rerun the script after the first failure, it passes the second time around.
It's possible that the Time to Live (TTL) of the domain name expires on the system or on a caching DNS server before the next invocation of wget, and that the next attempt to resolve the name from the authoritative server fails. See http://en.wikipedia.org/wiki/Time_to_live#DNS_records. Of course, it's also possible that the caching DNS server becomes temporarily inaccessible.
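If a transient resolution failure is the cause, one workaround is to retry the download inside the script instead of failing on the first attempt (a sketch, not a guaranteed fix):

# Retry the download up to 3 times, pausing between attempts
for i in 1 2 3; do
    wget http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm && break
    sleep 5
done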
I have a Linux dev server I watch, and lately it's been chugging at some points, so I'd like to keep a better eye on it. I used to use Gkrellm, but it's been a pain trying to get Gkrellm to build on my Mac.
Besides serving X remotely (which would not be optimal), I guess I'm looking for alternatives to Gkrellm.
I would like a program that will let me watch the I/O, CPU, memory, processes, etc. of a remote server running Linux. I am on a Mac.
If you're looking for something simple, and almost certainly already installed on the Linux box, you could SSH into the Linux machine and use tools like top, vmstat, and lsof to see what it's up to.
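For example (user@devserver is a placeholder for your own login and host):

ssh -t user@devserver top      # interactive CPU/memory/process view; -t allocates a tty
ssh user@devserver vmstat 5    # memory and I/O statistics every 5 seconds
ssh user@devserver 'lsof -i'   # open network connections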
If you still want to try Gkrellm on the Mac, you can follow this procedure:
# sudo port install gkrellm
If you have this error:
Error: Target org.macports.activate returned: Registry error: xorg-xproto 7.0.16_0 not registered as installed.
[...]
Error: Status 1 encountered during processing.
Do this:
# sudo port clean xorg-xproto
# sudo port install xorg-xproto
And continue the install:
# sudo port install gkrellm
Now if you have this error:
Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_gnome_gtk-doc/work/gtk-doc-1.11" && ./configure --prefix=/opt/local --with-xml-catalog=/opt/local/etc/xml/catalog " returned error 1
[...]
Error: Status 1 encountered during processing.
Do this:
# sudo port clean gtk-doc
# sudo port install gtk-doc
And lastly:
# sudo port install gkrellm
To start gkrellm:
# gkrellm
You could use Growl for this purpose. It's possible to send Growl messages from a Unix machine using netgrowl.py, which masquerades as the growlnotify program but is written entirely in Python.
You could then have a process running on the server that monitors the other bits, and posts notifications when limits are exceeded, or whatever.
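A minimal sketch of such a watcher, assuming netgrowl.py accepts the same -t/-m flags as growlnotify, and using an arbitrary load threshold of 4:

# Post a Growl notification whenever the 1-minute load average exceeds 4
while sleep 60; do
    load=$(awk '{print $1}' /proc/loadavg)
    if awk -v l="$load" 'BEGIN { exit !(l > 4) }'; then
        python netgrowl.py -t "dev server" -m "load average is $load"
    fi
done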
It would be a hand-coded solution, but we are on Stack Overflow, so programming-related stuff is the way to go :)
(Oh, and the netgrowl.py page has a few links to similar projects in other languages, if that's your thing, too).
You are probably looking for a more full-featured monitoring tool like Zabbix: https://zabbix.org