I'm setting up a honeypot for my boss, and I'm running into an issue getting the time to synchronize with my workstation's time. The reason I want this is that before following the steps in the link below, I had the NOOBS Raspbian OS installed, which had the same issue of not being able to clone; after running sudo apt-get install ntp, I was able to clone the files into the system with no problems. But because the link below calls for the "Raspbian Stretch Lite" OS, I had to redo the process, and now I can't seem to get the time to sync anymore.
https://github.com/DShield-ISC/dshield
So when I attempt the following command from the steps:
git clone https://github.com/DShield-ISC/dshield.git
fatal: unable to access 'https://github.com/DShield-ISC/dshield.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
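For what it's worth, my understanding is that certificate verification fails like this when the system clock is far enough off that the cert looks not yet valid or expired, which is part of why I'm chasing the time sync. A rough sketch of how to compare the Pi's clock against a known-good source (it assumes curl is available, which it may not be on a fresh Stretch Lite image):
date -u
curl -skI https://github.com | grep -i '^date:'
# -k skips verification so the Date header comes back even with a bad clock;
# if the two dates differ wildly, TLS verification will keep failing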
I've tried the following methods with no luck:
sudo /etc/init.d/ntp stop
sudo raspi-config (setting timezone)
sudo /etc/init.d/ntp start
The timedatectl settings are as follows:
Local time: Mon 2016-02-04 12:04:52 PST
Universal time: Mon 2016-02-04 20:04:52 UTC
RTC time: n/a
Time zone: America/Los_Angeles (PST, -0800)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
Also, I've tried:
sudo ntpd -q -g
I've noticed that with this command I get a ton of output and the process never finishes. If this is vital, I can re-run the command and report what kind of information comes back.
Yes, I've set the time as close as possible to the actual clock before attempting any of these. I've noticed that regardless, it's always a minute or a few seconds off after rebooting; I'm assuming that's because it isn't actually synchronized, even though it states that it is.
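For what it's worth, one way to check whether ntpd is actually reaching any servers is to list its peers (a quick sketch; ntpq ships with the ntp package mentioned above):
ntpq -p
# an asterisk (*) in the first column marks the peer ntpd is currently synced to;
# a reach of 0 on every line means the daemon isn't getting any responses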
The cert error was due to me being hardwired into my RPi versus using Wi-Fi; after going into sudo raspi-config and setting up a Wi-Fi connection, I was able to successfully clone the GitHub repo.
For the last couple of days I've been unsuccessfully trying to clone our huge SVN repository into Git.
Every time, sooner or later, I run into the following error:
Software caused connection to abort: Error running context: Software caused connection abort at: C:/Program Files/Git/mingw64/share/perl5/Git/SVN/Ra.pm line 312.
I couldn't find any log entry on my Windows 10 client nor on the Ubuntu server giving details on the reason for this error.
Stack Overflow question #53157918 suggested increasing the Apache server timeout. I increased it to 10 times the original value, but apparently this didn't help.
Judging from the stdout output, reading each of the files is a snap, so I don't think it's a transmission timeout issue anyway.
Edit
I just tried again ... This time the error is Out of Memory:
libsvn: Out of memory - terminating application.
1 [main] perl 735 cygwin_exception::open_stackdumpfile: Dumping stack trace to perl.exe.stackdump
As a workaround, you can spin up an Ubuntu virtual machine and import your repository there. Here is what I did.
Download and install Oracle VirtualBox.
Download an Ubuntu ISO and use it as the VM's OS.
Open a terminal and install git and git-svn:
sudo apt-get install git git-svn
Follow your procedure to check out (a sketch of the typical command follows). Using this method I successfully downloaded a big SVN repo into Git and transferred it to Windows.
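For reference, the checkout step inside the VM is an ordinary git svn clone; a minimal sketch, where the URL is a placeholder and --stdlayout assumes the conventional trunk/branches/tags layout:
git svn clone --stdlayout https://svn.example.com/our-repo our-repo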
Since you encountered the out-of-memory error, it would be a good idea to increase Git's pack limits, window size, cache, and so on. This is probably the solution:
cd /migrated/git/repo/.git (the directory created by the git svn clone command)
Edit the config file there as below:
Under the [core] section, add these lines:
packedGitLimit = 512m
packedGitWindowSize = 512m
longpaths = true
and also add the following sections:
[http]
postBuffer = 100000000
[pack]
deltaCacheSize = 256m
packSizeLimit = 256m
windowMemory = 1024m
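If you prefer not to edit the file by hand, the same settings can be applied with git config run from inside the cloned repository; a sketch of the equivalent commands using the values above:
git config core.packedGitLimit 512m
git config core.packedGitWindowSize 512m
git config core.longpaths true
git config http.postBuffer 100000000
git config pack.deltaCacheSize 256m
git config pack.packSizeLimit 256m
git config pack.windowMemory 1024m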
$ time sudo dbus-daemon --system
real 1m30.111s
user 0m0.017s
sys 0m0.003s
Bare-bones Arch Linux inside Docker on Arch Linux.
D-Bus Message Bus Daemon 1.12.16
Tried dbus-x11 from AUR, same. Every time.
Edit/Details: the sudo invocation above takes 1:30 to execute, but the actual dbus-daemon process is spawned right away and continues to run during and after the 1:30, successfully (i.e. it works). The reason I need dbus-daemon: avahi-daemon (more specifically, to be able to run avahi-browse --all and discover stuff on my network).
Edit 2: it seems that even though 'everything works' despite this slowness (avahi, network service discovery, etc.), the container becomes dead slow. Merely running sudo echo 'something' takes 25 seconds (a figure perhaps related to a timeout of 25000 inside /usr/share/dbus-1/system.conf). Just like an infection. For what it's worth, after reading more, it seems the frustration of needing dbus is not restricted to the world of containerization - plenty of articles/communities like this and this.
I hit this issue with various Docker images, but not always. I dug deeper into it and found an interesting comment on the systemd repo.
The images I'm currently working on had systemd configured as a provider for passwd and group:
$ cat /etc/nsswitch.conf
passwd: sss files systemd
group: sss files systemd
Then I removed the systemd provider ($ sed -i 's/ systemd//g' /etc/nsswitch.conf) and the 90s hang was gone when starting with dbus-daemon --system --nofork. This was really a PITA to find out.
I could also verify that exactly this was the issue/difference for another Docker image I'm maintaining.
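If you control the image, the same fix can be baked in at build time; a minimal Dockerfile sketch, where the base image name and the rest of the setup are placeholders:
# strip the systemd NSS provider so dbus-daemon --system doesn't hang for ~90s
FROM archlinux:latest
RUN sed -i 's/ systemd//g' /etc/nsswitch.conf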
I have installed Docker on Windows and successfully brought up the bash shell window. However, when I test my installation with docker run hello-world I get the following:
Post http://127.0.0.1:2375/v1.20/containers/create: dial tcp 127.0.0.1:2375: ConnectEx tcp: No connection could be made because the target machine actively refused it..
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?
I thought at first it was because I needed to be logged in to Docker Hub. When I tried docker login and gave it my Docker Hub account name, I got
The handle is invalid.
BTW, it did not ask me for my password.
I am puzzled. Please advise.
A little more troubleshooting helped resolve the problem. Steps taken:
I ran the new program Kitematic. It complained that it could not run the VM and offered a remove-and-setup-again option.
I chose the remove-and-setup-again option.
I then ran Kitematic again and it prompted for my Docker Hub credentials.
Once I had successfully entered those and Kitematic seemed healthy, I tried the Quickstart terminal again.
Running that provoked some checks from my anti-virus software, which wanted to block internet activity from the VM. Once I overrode that, all went well.
In conclusion, it seems that retrying an install does change things (I do not know why), and secondly, anti-virus software can be a bother.
I've read several other 'git hangs on clone' questions, but none match my environment and details. I'm using git built under cygwin (msys git is not an option) to clone a repo from a Linux host over SSH.
git clone user@host:repo
I've tested against the same host on other platforms, and it works fine, but on this Windows machine the clone hangs indefinitely. I set GIT_TRACE=1 and it looks like the problem is with this command:
'ssh' 'user@host' 'git-upload-pack '\''repo'\'''
My SSH keys are set up correctly: ssh user@host works fine. When I run the command, I get a bunch of output that ends like this:
...
003dbbd3db63763922ad75bbeefa3811dce001576851 refs/tags/start
0000
Then it hangs for 20+ minutes, which is the longest I've waited before killing it.
The server has Git 1.7.11.7 with OpenSSH 5.9p1, while the client has Git 1.7.9 with OpenSSH 6.1p1.
Is that supposed to be the end of the git-upload-pack output? Is this a bug in Git or my configuration?
The upcoming Git 1.8.5 (Q4 2013) will document the smart HTTP protocol in more detail.
See commit 4c6fffe2ae3642fa4576c704e2eb443de1d0f8a1 by Shawn O. Pearce.
With that detailed documentation, the idea would be to monitor the web requests exchanged between your Git client and the server, and see if they conform to what is documented below.
That could help pinpoint where the service "hangs".
The file Documentation/technical/http-protocol.txt insists on:
the "Smart Service git-upload-pack"
Clients MUST first perform ref discovery with '$GIT_URL/info/refs?service=git-upload-pack'.
C: POST $GIT_URL/git-upload-pack HTTP/1.0
S: 200 OK
S: Content-Type: application/x-git-upload-pack-result
S: Cache-Control: no-cache
S:
S: ....ACK %s, continue
S: ....NAK
Clients MUST NOT reuse or revalidate a cached response.
Servers MUST include sufficient Cache-Control headers
to prevent caching of the response.
Servers SHOULD support all capabilities defined here.
Clients MUST send at least one 'want' command in the request body.
Clients MUST NOT reference an id in a 'want' command which did not appear in the response obtained through ref discovery unless the server advertises capability "allow-tip-sha1-in-want".
The "negociation" algorithm
(c) Send one $GIT_URL/git-upload-pack request:
C: 0032want <WANT #1>...............................
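One simple way to monitor those requests from the client side is Git's built-in curl tracing (a sketch; the URL is a placeholder, and this only applies when the remote is HTTP(S) rather than SSH):
GIT_CURL_VERBOSE=1 GIT_TRACE=1 git clone https://example.com/project.git
# prints each HTTP request/response header, so you can see at which step the transfer stalls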
This worked for me, in case it helps someone else.
Check your Git remote URL. It might hang at git-upload-pack in the trace if you're using the wrong URL type. Change the URL on your remote from SSH (git@github.com:) to HTTPS (https://github.com/); a sketch of the commands follows.
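Assuming the remote is named origin, and with the repository path as a placeholder, something like:
git remote -v
git remote set-url origin https://github.com/<user>/<repo>.git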
We faced a similar issue, and we attributed it to the following: our Git repo has a LOT of binary files checked in (multiple versions, accumulated over the past 1.5 years of this project), so we assumed that this was the cause.
Supporting this theory, we have other, more recent code bases (which therefore do not have so many binary files and versions), and they do not exhibit this behavior.
Our setup: Git on Linux, site-to-site VPN between London and India over a T1 line.
I was having this same problem after I added some jazz like this to my ssh config in order to set window titles in tmux:
Host *
PermitLocalCommand yes
LocalCommand if [[ $TERM == screen* ]]; then printf "\033k%h\033\\"; fi
Getting rid of that fixed my Git.
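If you want to keep the window-title trick without it corrupting non-interactive sessions like Git's SSH transport, one option (an untested sketch, not part of the original fix) is to only print the escape sequence when stdout is a terminal:
Host *
PermitLocalCommand yes
LocalCommand if [ -t 1 ] && [[ $TERM == screen* ]]; then printf "\033k%h\033\\"; fi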
An outdated PuTTY can also cause this. Your system might be using plink.exe as GIT_SSH.
You can install the latest development build from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html to make sure this is not the problem.
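To check whether Git is actually going through plink, you can inspect the GIT_SSH environment variable (a quick sketch for a Git Bash or cygwin shell; the OpenSSH path is an assumption):
echo "$GIT_SSH"
# if it points at plink.exe, either update PuTTY or point Git back at OpenSSH:
export GIT_SSH=/usr/bin/ssh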
My problem was simple. I updated the VPN client and git started hanging. I quit the VPN client and restarted it.
I am trying to build and install Membase from a source tarball. The steps I followed are:
Un-archive the tar membase-server_src-1.7.1.1.tar.gz
Issue make (from within the untarred folder)
Once done, I go into the install/bin directory and invoke the membase-server script.
This starts up the server with a message:
The maximum number of open files for the membase user is set too low.
It must be at least 10240. Normally this can be increased by adding
the following lines to /etc/security/limits.conf:
I tried updating limits.conf as suggested, but no luck; it keeps popping up the same message and continues booting.
Given that the server started anyway, I tried accessing memcached over port 11211, but I get a connection-refused message. I then figured out (via netstat) that memcached is listening on 11210 and tried telnetting to port 11210; unfortunately, the connection is closed as soon as I issue the following commands:
stats
set myvar 0 0 5
Note: I am not getting any output from the commands above. (Yes, stats did not show anything, but I still issued set.)
Could somebody help me build and install Membase from source? Also, why is memcached listening on 11210 instead of 11211?
It would be great if somebody could also give me a step-by-step guide that I can follow to build from source from the Git repository (I have not used autoconf before).
P.S.: I have tried installing from binaries (the Debian package) on the same machines, and I am able to install and telnet successfully. Hence I'm not sure why the build from source isn't working.
You can increase the number of file descriptors on your machine by using the ulimit command. Try doing (you might need to use sudo as well):
ulimit -n 10240
I personally have this set in my .bashrc so that whenever I start my terminal it is always set for me.
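If you'd rather make the limit persistent system-wide instead of per-shell, the equivalent /etc/security/limits.conf entries look roughly like this (a sketch; it assumes the server runs as a user named membase, as the startup message implies):
# raise the open-file limit for the membase user
membase soft nofile 10240
membase hard nofile 10240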
Also, memcached listens on port 11210 by default for Membase. This is done because Moxi, the memcached proxy server, listens on port 11211. I'm also pretty sure that the memcached version used for Membase only listens for the binary protocol, so you won't be able to telnet to 11210 and have commands work correctly. Telnetting to 11211 (Moxi) should work, though.