I am writing a bash script for the public (and myself) to quickly set up a Puppet / TheForeman server on an Ubuntu 14.04 LTS machine; it is based on this HOWTO. I have not done anything about DNS:
ping $(hostname -f)
PING foreman.test.local (192.168.1.2) 56(84) bytes of data.
64 bytes from foreman.test.local (192.168.1.2): icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from foreman.test.local (192.168.1.2): icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from foreman.test.local (192.168.1.2): icmp_seq=3 ttl=64 time=0.054 ms
64 bytes from foreman.test.local (192.168.1.2): icmp_seq=4 ttl=64 time=0.054 ms
After running the bash script I expect the agent test run to succeed, but instead I get an error message:
sudo puppet agent --test
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 400 on SERVER: Failed to find foreman.test.local via exec: Execution of '/etc/puppet/node.rb foreman.test.local' returned 1:
Info: Retrieving plugin
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed when searching for node foreman.test.local: Failed to find foreman.test.local via exec: Execution of '/etc/puppet/node.rb foreman.test.local' returned 1:
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
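To see the real reason the external node classifier is failing, the exact command from the error message can be run by hand; a quick diagnostic sketch, reusing the script path and hostname from the log above:
# run the ENC script manually to see the error that puppet agent hides
sudo /etc/puppet/node.rb foreman.test.local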
Reading the error message, the obvious option seems to be installing the distribution's puppetmaster package:
sudo apt-get install puppetmaster -y
but that is exactly the problem; the stock Ubuntu package does not work in this case. Instead, fetch the Puppet Labs release package:
wget apt.puppetlabs.com/puppetlabs-release-trusty.deb
and install it with
dpkg -i puppetlabs-release-trusty.deb
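For completeness, here is a minimal sketch of the whole sequence, assuming the goal is to end up with the puppetmaster package coming from the Puppet Labs repository rather than from Ubuntu:
wget apt.puppetlabs.com/puppetlabs-release-trusty.deb
sudo dpkg -i puppetlabs-release-trusty.deb
# refresh the package lists so apt can see the newly added Puppet Labs repository
sudo apt-get update
# puppetmaster now resolves to the Puppet Labs build instead of the stock Ubuntu one
sudo apt-get install -y puppetmaster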
I am on Windows 11 and using Ubuntu 20.04.5. I recently updated Windows, which I think could be related to this problem but also might not be. Specifically, I am suspicious of the update labelled
Windows Subsystem for Linux WSLg Preview - 1.0.27
which I installed three days ago.
This is my own personal computer.
Pip: pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)
This is my first question and I am mostly just the type of person who wants to use pip and Python without worrying about it. So if I forgot important background/version info, just tell me how to get it and I will try to add it.
Anytime I try to install a package with pip, I get:
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=120)")': /simple/pytorch/
This happens even if I use
--default-timeout=1000
but then it just waits 1000 seconds instead of 15. My internet speed seems fine for everything else.
Here is what I have tried so far, after reading everything I could find by googling the error (nothing changed at all):
Uninstalling and reinstalling pip
Disabling IPv6
Disabling, and then completely uninstalling, the VPN I had
Restarting computer
Restarting Router (including hard reset)
Restarting Modem
Disabling the Windows firewall (temporarily, just to see if it would fix it; re-enabled afterwards)
Clearing the display variable with unset DISPLAY
Adding --no-cache-dir to the pip install command (combined invocation shown after this list)
Updating things
sudo apt update && sudo apt upgrade
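For reference, this is the single invocation I end up with when combining the flags above (pytorch is simply the package I have been testing with):
# same command, with the longer timeout and the cache disabled in one go
pip install --default-timeout=1000 --no-cache-dir pytorch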
If I try
sudo pip install pytorch -vvv
Then I get
Non-user install because site-packages writeable
Created temporary directory: /tmp/pip-ephem-wheel-cache-b2g02nuo
Created temporary directory: /tmp/pip-req-tracker-5hiikgit
Initialized build tracking at /tmp/pip-req-tracker-5hiikgit
Created build tracker: /tmp/pip-req-tracker-5hiikgit
Entered build tracker: /tmp/pip-req-tracker-5hiikgit
Created temporary directory: /tmp/pip-install-j_2ikmsi
1 location(s) to search for versions of pytorch:
* https://pypi.org/simple/pytorch/
Fetching project page and analyzing links: https://pypi.org/simple/pytorch/
Getting page https://pypi.org/simple/pytorch/
Found index url https://pypi.org/simple
Getting credentials from keyring for https://pypi.org/simple
Getting credentials from keyring for pypi.org
Looking up "https://pypi.org/simple/pytorch/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
It hangs there until hitting the timeout.
If I try:
curl https://pypi.python.org/simple/ -v
Then I get
* Trying 151.101.188.223:443...
* TCP_NODELAY set
* Connected to pypi.python.org (151.101.188.223) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* Operation timed out after 300157 milliseconds with 0 out of 0 bytes received
* Closing connection 0
curl: (28) Operation timed out after 300157 milliseconds with 0 out of 0 bytes received
If I try
ping pypi.org
Then I get...
PING pypi.org (151.101.128.223) 56(84) bytes of data.
64 bytes from 151.101.128.223 (151.101.128.223): icmp_seq=1 ttl=55 time=10.5 ms
64 bytes from 151.101.128.223 (151.101.128.223): icmp_seq=2 ttl=55 time=11.7 ms
64 bytes from 151.101.128.223 (151.101.128.223): icmp_seq=3 ttl=55 time=116 ms
64 bytes from 151.101.128.223 (151.101.128.223): icmp_seq=4 ttl=55 time=11.2 ms
64 bytes from 151.101.128.223 (151.101.128.223): icmp_seq=5 ttl=55 time=11.8 ms
I am trying to set up the Hadoop environment on a virtual machine, following the instructions listed in the book Hadoop For Dummies.
One of the steps indicates the following command -
yum install hadoop\* mahout\* oozie\* hbase\* pig\* hive\* zookeeper\* hue\*
When I run that I get the following error -
[root@localhost Desktop]# yum install hadoop\*
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: centos.mirror.crucial.com.au
* extras: centos.mirror.crucial.com.au
* updates: centos.mirror.nsw.au.glovine.com.au
base | 3.7 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
Setting up Install Process
No package hadoop* available.
Error: Nothing to do
For hadoop, zookeeper and hue alike I got the error saying the package was not found. I looked at those mirror sites and I do see that hadoop is not present. Is there any way to force the mirror to some other location?
Edit -
As pointed out below, I did try to fetch the repo file with the following command -
wget -O /etc/yum.repos.d/bigtop.repo http://archive.apache.org/dist/bigtop/bigtop-1.0.0/repos/centos6/bigtop.repo
which throws the following Connection Refused error -
[root@localhost Desktop]# wget -O /etc/yum.repos.d/bigtop.repo http://www.apache.org/dist/bigtop/bigtop-1.0.0/repos/centos6/bigtop.repo
--2015-12-30 05:03:09-- http://www.apache.org/dist/bigtop/bigtop-1.0.0/repos/centos6/bigtop.repo
Resolving www.apache.org... 88.198.26.2, 140.211.11.105, 2a01:4f8:130:2192::2
Connecting to www.apache.org|88.198.26.2|:80... failed: Connection refused.
Connecting to www.apache.org|140.211.11.105|:80... failed: Connection refused.
Connecting to www.apache.org|2a01:4f8:130:2192::2|:80... failed: Network is unreachable.
Likewise I tried the CDH one-click install as pointed out by user1862493 and I get the following error -
[root@localhost Desktop]# wget https://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm
--2015-12-30 05:07:49-- https://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm
Resolving archive.cloudera.com... 23.235.41.167
Connecting to archive.cloudera.com|23.235.41.167|:443... failed: Connection refused.
yum update works fine and so does the internet within the VM. Any help?
You need to add the repository first.
wget https://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm
yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm
yum clean all
Then try to install hadoop components.
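For example, once the repository is registered, the install command from the book should be able to resolve the packages (a sketch reusing the same package globs as in the question):
# the globs must be escaped so the shell does not expand them locally
yum install hadoop\* mahout\* oozie\* hbase\* pig\* hive\* zookeeper\* hue\*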
Ref http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cdh_ig_cdh5_install.html#topic_4_4_1_unique_2__p_31_unique_2
I'm sure there is a simple explanation but I cannot seem to figure it out. I have a CentOS server which needs to do a daily FTP upload of a database to an external backup provided by a QNAP NAS. The server also puts a copy of the DB onto a second CentOS server. The file is >800 MB and growing.
I have a script which handles the FTP put of the file to the second server and this is called by crontab daily and works every time.
I have an almost identical script, also called by crontab, for the FTP to the QNAP, and it always truncates the file at exactly 150114776 bytes. Strangely, if I run this same script from the CLI it always works perfectly, delivering the entire file to the QNAP, which suggests that no QNAP limit on file size is coming into play.
The problem is consistent. Invoke the transfer with crontab and the file is truncated. Invoke with CLI and the whole file is transferred. No error is ever reported; FTP thinks it has done the whole job.
Sample log of transfer by crontab:
Connected to 172.172.1.1 (172.172.1.1).
220 NASFTPD Turbo station 1.3.4e Server (ProFTPD) [::ffff:172.172.1.1]
Remote system type is UNIX.
Using binary mode to transfer files.
331 Password required for fred
230 User fred logged in
250 CWD command successful
local: DATA_bk.sql.1.gz remote: DATA_bk_20150811_071501.sql.gz
227 Entering Passive Mode (172,172,1,1,217,232).
150 Opening BINARY mode data connection for DATA_bk_20150811_071501.sql.gz
226 Transfer complete
150114776 bytes sent in 23 secs (6.4e+03 Kbytes/sec)
221 Goodbye.
And a manual invocation:
Connected to 172.172.1.1 (172.172.1.1).
220 NASFTPD Turbo station 1.3.4e Server (ProFTPD) [::ffff:172.172.1.1]
Remote system type is UNIX.
Using binary mode to transfer files.
331 Password required for fred
230 User fred logged in
250 CWD command successful
local: DATA_bk.sql.1.gz remote: DATA_bk_20150811_120117.sql.gz
227 Entering Passive Mode (172,172,1,1,217,189).
150 Opening BINARY mode data connection for DATA_bk_20150811_120117.sql.gz
226 Transfer complete
879067272 bytes sent in 182 secs (4.5e+03 Kbytes/sec)
221 Goodbye.
Can anyone point me to some rule I've overlooked or suggest a way to debug this?
Thanks
It turns out I made a simple error. The cron job was executing in the wrong directory, where there happened to be an old copy of the source file which just happened to be 150114776 bytes big. Sometimes the simplest causes are the toughest to see.
Data transfer to the QNAP now works perfectly every time.
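One way to avoid this kind of stale-file surprise is to make the cron entry change into the intended directory explicitly before calling the script; a sketch, with a purely hypothetical path and schedule rather than my real crontab entry:
# run the QNAP upload from the directory that holds the fresh dump
15 7 * * * cd /var/backups/db && /usr/local/bin/ftp_to_qnap.sh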
We have the same problem. We opened a ticket with QNAP and this is the reply:
This is a known issue (bug); what you can do is downgrade to the former firmware version or wait until the next one
is released, where hopefully this issue is solved.
Sorry for the inconvenience.
So.... downgrade or wait...
I am using NetBeans to work on my PHP server. When I try to upload my project, there are no errors, but the files remain unchanged on the server.
Example: I change a height from 10% to 7% in the CSS and upload. No errors occur, but the height remains unchanged on the site.
Log (Octothorpes used to remove sensitive info):
220 ProFTPD 1.3.5 Server (Debian) [::ffff:###.###.#.##]
USER ########
331 Password required for ########
PASS ******
230 User ######## logged in
TYPE I
200 Type set to I
CWD /var/www/html/www.example.com
250 CWD command successful
PWD
257 "/var/www/html/www.example.com" is the current directory
CWD /var/www/html/www.example.com/styles/css
250 CWD command successful
CWD /var/www/html/www.example.com/styles/css
250 CWD command successful
PASV
227 Entering Passive Mode (##,###,##,##,###,#).
STOR main.css.new
150 Opening BINARY mode data connection for main.css.new
226 Transfer complete
RNFR main.css.new
350 File or directory exists, ready for destination name
RNTO main.css
250 Rename successful
CWD /var/www/html/www.example.com/styles
250 CWD command successful
QUIT
221 Goodbye.
Summary
====================
Succeeded:
dir styles
dir styles/css
file styles/css/main.css
Runtime: 19 ms, processed: 1 file(s), 1.34 KB
Extra info:
Client running Windows 8.1 64 bit
Server running Ubuntu 64 bit
Server is on local network
FTP credentials are correct
Connection worked client-side previously on Windows 7 64 bit and Ubuntu 64 bit
If you don't disable caching in your browser, it will keep serving old versions of files (CSS, JS, JPG, etc.) and you won't see your changes. Disable caching and it should work.
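One quick way to confirm that the upload really reached the server and only the browser cache is stale is to fetch the stylesheet directly while bypassing caches; a sketch using the placeholder host and path from the log above:
# the throwaway query string stops any intermediate cache from returning an old copy
curl -H 'Cache-Control: no-cache' "http://www.example.com/styles/css/main.css?nocache=$(date +%s)"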
Ever since my machine was made VLAN-enabled, I am neither able to prepare new scripts using the JMeter 2.9 tool nor able to execute the old scripts which I used to run earlier on the same machine.
Below is the error message I got while running the old scripts:
Thread Name: 46_Drug Issue 1-1
Sample Start: 2014-11-19 16:22:40 IST
Load time: 1001
Latency: 0
Size in bytes: 1720
Headers size in bytes: 0
Body size in bytes: 1720
Sample Count: 1
Error Count: 1
Response code: Non HTTP response code: java.net.ConnectException
Response message: Non HTTP response message: Connection refused: connect
Response headers:
HTTPSampleResult fields:
ContentType:
DataEncoding: null
While recording a new test plan on my Windows machine, I am able to navigate the different pages with the HTTP Proxy Server enabled in the JMeter tool, but no HTTP request is getting recorded in the transaction controller.
Can anyone please suggest how to overcome this issue?
In my opinion the issue is related to the proxy.
JMeter sits between your machine and the proxy, and that is how it records all the requests coming to and going from your machine. It doesn't matter whether you are on a VLAN or WAN if your proxy settings are correct.
Please check your proxy settings once, apply similar proxy settings (localhost, 8080) on the VLAN, or else you can provide a separate proxy for JMeter by starting it with these parameters:
jmeter.bat -H <hostname> -P <port> -u <username> -a <pwd>
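For instance, with a corporate proxy listening on port 8080 this might look as follows (the hostname and credentials are purely hypothetical placeholders):
rem point JMeter at the proxy so recorded and replayed requests can leave the VLAN
jmeter.bat -H proxy.corp.example.com -P 8080 -u jmeteruser -a secretpassword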