I really don't understand the output I got when I entered vagrant ssh for the first time in the terminal. I'm using Laravel Homestead.
The output was:
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-30-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information disabled due to load higher than 1.0
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Fri Oct 3 01:24:38 2014 from 10.0.2.2
I really don't understand:
Last login: Fri Oct 3 01:24:38 2014 from 10.0.2.2 // because it's the first time I've entered vagrant ssh
and this:
System information disabled due to load higher than 1.0
The last login you see may come from when the Vagrant box was originally created. For example, I just launched a new Vagrant VM based on hashicorp/precise64 and the "last login" showed as September 2012. Subsequent logins will show the last time you logged in.
Regarding the system information being disabled, see this Server Fault question: System information disabled due to load higher than 1.0 amazon ec2. In short, Ubuntu's MOTD script simply skips gathering system information whenever the load average is above a threshold.
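If you're curious where that message comes from, you can look inside the VM; the path below is the stock Ubuntu 14.04 location (assuming the Homestead image hasn't moved it):
# the MOTD fragment that prints "System information disabled due to load higher than ..."
cat /etc/update-motd.d/50-landscape-sysinfo
# compare its threshold with the current load average
uptime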
Both of the above can safely be considered normal behavior.
I am following the CoreOS in Action book (and also the CoreOS online instructions) to bring up a 3-node cluster using Vagrant and VirtualBox on macOS.
It all goes fine: the machines come up and run, and I can ssh into one of them, but it looks like the boxes that were brought up are missing fleetctl (which makes no sense, as it's such a core component of CoreOS):
$ vagrant ssh core-01 -- -A
Last login: Thu Mar 1 21:28:58 UTC 2018 from 10.0.2.2 on pts/0
Container Linux by CoreOS alpha (1702.0.0)
core@core-01 ~ $ fleetctl list-machines
-bash: fleetctl: command not found
core@core-01 ~ $ which fleetctl
which: no fleetctl in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin)
What am I doing wrong?
I have changed the number of instances to 3, created a new "discovery token URL" and updated the user.data file; Googling around I seem to be the one and only person having this problem.
Thanks in advance for any suggestions you may have!
PS -- yes, I have tried (several times!) to vagrant destroy and rebuild the cluster; I even nuked the repo and re-cloned it. Same issue every time.
The answer is going to make you a bit sad. Here it is:
CoreOS no longer supports fleet. It's gone. Ciao :(
https://coreos.com/blog/migrating-from-fleet-to-kubernetes.html
To this end, CoreOS will remove fleet from Container Linux on February 1, 2018, and support for fleet will end at that time. fleet has already been in maintenance mode for some time, receiving only security and bugfix updates, and this move reflects our focus on Kubernetes and Tectonic for cluster orchestration and management.
You are using CoreOS 1702.0.0; fleet has been removed since CoreOS 1675.0.1: https://coreos.com/releases/
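For what it's worth, you can confirm this from inside the VM with standard Container Linux commands (nothing specific to the book's setup):
cat /etc/os-release                      # shows VERSION=1702.0.0 for the box above
systemctl list-unit-files | grep fleet   # prints nothing on releases where fleet was removed
If you just want to follow the book's fleet chapters, one workaround would be to pin the Vagrant box to a release older than 1675.0.1 (coreos-vagrant's config.rb.sample exposes an image-version setting for this, if your checkout has one); otherwise the fleet material simply won't work on current releases.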
I found the process [sync_supers] running twice, each instance using 100% of the CPU.
[capture of htop]
It was triggered by the user share, which is an account used by Samba users to access a shared folder. The share user only has access to /home/share.
lucas@arturito:~$ cat /etc/passwd | grep share
share:x:1002:1002:Share,,,:/home/share:/bin/bash
tomcat7:x:115:125::/usr/share/tomcat7:/bin/false
I had never seen that process before, and according to the stats I got from Munin, it had been running for an hour or so.
[Munin stats]
What's the sync_supers process? Is my box compromised?
I've run chkrootkit, rkhunter and debsums, and everything seems to be OK...
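One more check I'd add if they come back (a sketch; the 1234 PID is just a placeholder): a bracketed name like [sync_supers] normally means a kernel thread, so it should be a child of kthreadd (PID 2) and have no userspace executable behind it.
ps -o pid,ppid,comm -p 1234     # placeholder PID; a genuine kernel thread reports PPID 2 (kthreadd)
sudo readlink /proc/1234/exe    # prints nothing for a kernel thread, a binary path for a userspace impostor
wc -c /proc/1234/cmdline        # 0 bytes for a kernel thread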
I'm running:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty
Linux arturito 3.13.0-100-generic #147-Ubuntu SMP Tue Oct 18 16:48:51 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
I killed both processes and they went away.
I'm kind of concerned about this. Is there anything else that I should do/check?
Thanks!
Lucas
I am running a JMeter test with one master and two slave systems.
The values I provided on the master system are:
Number of threads: 750
Ramp-up: 420 seconds
Loop count: 1
When I press Ctrl+Shift+R, test execution begins on both remote systems "A" and "B", and the message
"Starting the test on host XXX.XXX.X.XXX @ Mon Feb 8 08:08:21 IST 2016"
is displayed on the command prompt of both systems.
But after some time I found that there was no response from the server. I checked whether there was any activity in the "summary listener", but there was none.
I checked the generated "summary.xlsx" file and found that all the requests from system "A" had been served, but only some of the requests from system "B" were.
When I checked system A's command prompt, it said
"Finished the test on host XXX.XXX.X.XXX @ Mon Feb 8 08:08:21 IST 2016".
(I think that is OK, because all of its requests were served.)
When I checked system B's command prompt, I DIDN'T find the message
"Finished the test on host XXX.XXX.X.XXX @ Mon Feb 8 08:08:21 IST 2016".
Hoping that system B's requests would eventually be executed, I left it for 8 hours.
But to my surprise, when I checked in the morning it was exactly where I had last seen it.
No further requests from system B were executed; I checked the server log and found no response there either. I also didn't find the message
"Finished the test on host XXX.XXX.X.XXX @ Mon Feb 8 08:08:21 IST 2016"
on system B.
Please suggest how I can get all the requests from both slave systems served without running into the above problem.
I can bet that the issue is different subnets. Read the following step-by-step manual, especially the limitations section:
RMI cannot communicate across subnets without a proxy; therefore neither can jmeter without a proxy.
So, make sure that both A and B are in the same subnet as the master.
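A quick way to compare (the slaves appear to be Windows boxes, hence cmd; adjust for the master if it runs something else):
rem Run on the master and on both slaves, then compare the network prefixes and subnet masks.
ipconfig | findstr /i "IPv4 Subnet"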
I assume that you are able to run a standalone (non-distributed) test on slave B without issues. If you have not checked that, please verify that it works fine.
In that case, read this page: https://cloud.google.com/compute/docs/tutorials/how-to-configure-ssh-port-forwarding-set-up-load-testing-on-compute-engine/. It has good information on JMeter communication during distributed testing.
I would also check whether the RMI ports on slave B are open.
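A sketch of both checks (file names and the 192.0.2.102 address are placeholders; 1099 is only the JMeter default, so if you changed server_port or server.rmi.localport in jmeter.properties use those values instead):
rem 1) On slave B: make sure the plan runs standalone in non-GUI mode.
jmeter -n -t MyTest.jmx -l slaveB-standalone.jtl
rem 2) From the master: check that slave B's RMI registry port is reachable
rem    (use nc -vz 192.0.2.102 1099 instead if the master is a Linux box).
telnet 192.0.2.102 1099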
Over the weekend I enabled FileVault 2 on OS X 10.8.4.
When I fired up MAMP PRO just now, I received the following error:
Apache wasn't able to start. Please check log for more information.
When I look at my log, the last line is:
[Fri Jun 21 18:57:52 2013] [notice] caught SIGTERM, shutting down
I have tried stopping and restarting Apache.
If I run sudo apachectl start and then try to start MAMP, I receive a different error message:
The built in Apache is active which can cause a port conflict with at least one of your virtual hosts.
It's recommended either to choose a port different than 80 or to stop the built in Apache.
Enabling FileVault is the only thing I can think of since Friday that could have affected Apache, but I'm not sure how to debug what exactly the problem is.
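(In case it helps anyone hitting the same thing before resorting to turning FileVault off: a quick way to see which process is actually holding port 80 on OS X, using only tools that ship with the system.)
sudo lsof -nP -iTCP:80 -sTCP:LISTEN   # shows whether MAMP's Apache or the built-in httpd owns port 80
sudo apachectl stop                   # stops the built-in Apache if it turns out to be the one in the way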
I disabled FileVault, did a system reboot, and everything is back to normal. So apparently there are some issues with running Mac Apache and FileVault. Not much has been said about it online. Hopefully this thread can serve some purpose and we can figure out a solution.
I've been using JMeter on a Linux box from the command line for a little while. It works fine.
Today I tried it on a Windows box (new client, etc.) and it does work, but the OUTPUT in the console window is way different.
The Linux version dumps a running commentary to the console of what is going on: min/max/throughput/error messages, etc.
On Windows, there's none of that, e.g.:
C:\Users\Administrator>c:\temp\jakarta-jmeter-2.3.4\bin\jmeter -n -t "C:\Users\Administrator\Desktop\JMeter Test Files\MyProject.jmx" -Dthreads=10 -Dloop=10 -Drampup=1
Created the tree successfully using C:\Users\Administrator\Desktop\JMeter Test Files\MyProject.jmx
Starting the test @ Fri Oct 23 21:08:37 PDT 2009 (1256357317843)
Waiting for possible shutdown message on port 4445
Tidying up ... @ Fri Oct 23 21:09:09 PDT 2009 (1256357349008)
... end of run
Is there a setting I need to set? Something I'm missing from the configuration file?
Note: Please don't tell me to stick with the Linux version - let's keep any religious wars out of this discussion.
Basically, when you run your script from Windows, you are not specifying where your output should be stored. Check whether you are using any listeners and storing the results somewhere on the Windows box.
You can also add the -l <results file> parameter on the command line (run jmeter -? to see the full list of options).
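As for the missing running commentary itself: that console output comes from JMeter's Summariser, which is controlled by the summariser.* properties in bin/jmeter.properties; if they are commented out in the Windows copy's jmeter.properties (they are commented out by default in JMeter 2.3.x, if I remember right), you won't see the per-interval lines. Property names as in the stock file, so double-check your copy:
# enable the running summariser in non-GUI mode
summariser.name=summary
# seconds between console updates
summariser.interval=180
# write the summary lines to the console
summariser.out=true
# also write them to jmeter.log
summariser.log=true
You can also pass them without editing the file, e.g. jmeter -n -t test.jmx -Jsummariser.name=summary.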