Mercurial: "abandoned transaction found - run hg recover". Recover does not work - windows

Using TortoiseHg on Windows, I did a pull from a repository on my local drive to a repository on a USB stick.
During the pull there was, I guess, a glitch in the USB connection, because the pull got aborted halfway through.
Now I can't pull again; I get the message: abandoned transaction found - run hg recover
When I run hg recover I get the following message:
rolling back interrupted transaction
** unknown exception encountered, details follow
** report bug details to http://mercurial.selenic.com/bts/
** or mercurial@selenic.com
** Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)]
** Mercurial Distributed SCM (version 1.6.3)
** Extensions loaded: fixfrozenexts
Traceback (most recent call last):
File "hg", line 36, in <module>
File "mercurial\dispatch.pyo", line 16, in run
File "mercurial\dispatch.pyo", line 34, in dispatch
File "mercurial\dispatch.pyo", line 54, in _runcatch
File "mercurial\dispatch.pyo", line 494, in _dispatch
File "mercurial\dispatch.pyo", line 355, in runcommand
File "mercurial\dispatch.pyo", line 545, in _runcommand
File "mercurial\dispatch.pyo", line 499, in checkargs
File "mercurial\dispatch.pyo", line 492, in <lambda>
File "mercurial\util.pyo", line 420, in check
File "mercurial\commands.pyo", line 2869, in recover
File "mercurial\localrepo.pyo", line 606, in recover
File "mercurial\transaction.pyo", line 173, in rollback
ValueError: too many values to unpack
If I try to roll back manually, I get this message: no rollback information available
This time I can quite easily delete the whole backup on my stick and do a fresh pull, because the repo is small and the USB stick does not contain any other changes. But what if this happens with a larger repo, where I can't afford to restart from scratch? How can I recover the repo?

I experienced a similar issue and reported it as a bug, and the developer on the report suggested the problem is a corrupt journal. As described in the bug report, you can run hg verify to see the last "good" commit, and use hg clone -r <#> to recover up to that commit.
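A minimal sketch of that recovery, assuming hg verify reports the last intact changeset as revision 41 (a made-up number; use whatever verify shows for your repo, and note that damaged-repo and recovered-repo are placeholder paths):
hg verify
hg clone -r 41 damaged-repo recovered-repo
The clone copies everything up to and including that revision into a fresh repository and leaves the damaged one untouched.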

I think this is actually a bug in the source code. You should report the error to the Mercurial team, as the error message says.

I was using the TeamCity CI and deployment server, so this is probably a different issue, but I have posted an answer to a similar question.

Related

pyusb and libusb giving NoBackendError on macOS

I'm on macOS Big Sur trying to run rfcat. I am also running Anaconda, and I set up an environment with Python 2.7 after I originally got errors with Python 3.x. I have downloaded the pyusb, pyreadline, ipython, PySide2, and libusb dependencies. libusb seems to be giving me the most trouble. I keep getting the following error:
Error in resetup():NoBackendError('No backend available',)
Error in resetup():NoBackendError('No backend available',)
Error in resetup():NoBackendError('No backend available',)
^CTraceback (most recent call last):
File "/opt/anaconda3/envs/rftools/bin/rfcat", line 4, in <module>
__import__('pkg_resources').run_script('rfcat==1.9.5', 'rfcat')
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/pkg_resources/__init__.py", line 666, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1469, in run_script
exec(script_code, namespace, namespace)
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/rfcat-1.9.5-py2.7.egg/EGG-INFO/scripts/rfcat", line 63, in <module>
File "build/bdist.macosx-10.7-x86_64/egg/rflib/__init__.py", line 208, in interactive
File "build/bdist.macosx-10.7-x86_64/egg/rflib/chipcon_nic.py", line 103, in __init__
File "build/bdist.macosx-10.7-x86_64/egg/rflib/chipcon_usb.py", line 93, in __init__
File "build/bdist.macosx-10.7-x86_64/egg/rflib/chipcon_usb.py", line 238, in resetup
KeyboardInterrupt
From my research so far, "backend" is how pyusb refers to libusb, libusb1, or openusb. It is unable to find libusb within the environment. I did a little tracking and found that ultimately the find_library() function is found in ctypes, in util.py. It refers to the executable path for macOS as @executable_path/../lib/libusb%s... I tried to put libusb into a folder on my executable path to hopefully match this function's search, and still got the same errors.
I then found instructions on supplying a custom path for the pyusb backend. This appears to be a method where you specify the device and backend information at the beginning of your program. The code I inserted is as follows:
import usb.core
import usb.backend.libusb1 as libusb1
backend = libusb1.get_backend(find_library=lambda x: "/path/to/file/lib/libusb-1.0.0.dylib")
dev = usb.core.find(idVendor=0x1234, idProduct=0x5678, backend=backend)  # substitute your device's VID/PID
This induced a similar error, but with a different traceback, when I placed the code in rflib/__init__.py and the rfcat script:
Traceback (most recent call last):
File "/opt/anaconda3/envs/rftools/bin/rfcat", line 4, in <module>
__import__('pkg_resources').run_script('rfcat==1.9.5', 'rfcat')
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/pkg_resources/__init__.py", line 666, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1469, in run_script
exec(script_code, namespace, namespace)
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/rfcat-1.9.5-py2.7.egg/EGG-INFO/scripts/rfcat", line 12, in <module>
File "build/bdist.macosx-10.7-x86_64/egg/rflib/__init__.py", line 15, in <module>
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/usb/core.py", line 1304, in find
raise NoBackendError('No backend available')
usb.core.NoBackendError: No backend available
I have since reset things back to how I started and am still getting the original error listed above.
I think this largely has to do with the Anaconda environment, which I can of course remove. I want to try to find a way to make this work, though. Is there a better method to help rfcat find libusb as required? Another possible solution is resolving the actual executable_path. Does anyone know how to find the executable_path?
In case it helps, I will list the default locations for commands and files.
rfcat: /opt/anaconda3/envs/rftools/lib/python2.7/site-packages/rfcat
pyusb: /opt/anaconda3/envs/rftools/lib/python2.7/site-packages/usb
ctypes: /opt/anaconda3/envs/rftools/lib/python2.7/ctypes
libusb: /opt/anaconda3/envs/rftools/lib/python2.7/site-packages/usb/lib
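Not a definitive fix, but a sketch of something worth trying before patching any source: check which interpreter rfcat actually runs under (that is what @executable_path resolves against), then point the dynamic loader at the directory that already contains the dylib. The paths are taken from the listing above; on macOS, ctypes' find_library() consults DYLD_LIBRARY_PATH:
python -c "import sys; print(sys.executable)"
export DYLD_LIBRARY_PATH="/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/usb/lib:$DYLD_LIBRARY_PATH"
rfcat -r   # or however you normally invoke it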

Server install: HDFS client fails

I am getting the following errors for the HDFS client installation on Ambari. I have reset the server several times but still cannot get it resolved. Any idea how to fix this?
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 120, in <module>
HdfsClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 36, in install
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 41, in configure
hdfs()
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py", line 61, in hdfs
group=params.user_group
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/xml_config.py", line 67, in action_create
encoding = self.resource.encoding
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 87, in action_create
raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))
resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist
This is a soft link that points to /etc/hadoop/conf.
I ran:
python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users
After running it, it removed /etc/hadoop/conf.
However, reinstalling does not recreate it, so you may have to create all the conf files yourself.
I hope someone can patch this.
yum -y erase hdp-select
If you have done the installation multiple times, some packages might not be cleaned. To remove all HDP packages and start with a fresh installation, erase hdp-select.
If this does not help, remove all the versions from /usr/hdp; delete this directory if it contains multiple versions of HDP.
Remove all the installed packages, like hadoop, hdfs, zookeeper, etc.:
yum remove zookeeper* hadoop* hdp*
I ran into the same problem: I was using HDP 2.3.2 on CentOS 7.
The first problem:
Some conf files point to the /etc/<component>/conf directory (as they are supposed to). However, /etc/<component>/conf points back to the other conf directory, which leads to an endless loop. I was able to fix this problem by removing the /etc/<component>/conf symbolic links and creating real directories.
The second problem:
If you run the Python scripts to clean up the installation and start over, several directories do not get recreated, such as the hadoop-client directory. This leads to exactly your error message. The cleanup script also does not remove several users and directories; you have to remove those yourself with userdel and groupdel.
UPDATE:
It seems this was a problem with HDP 2.3.2; in HDP 2.3.4, I did not run into it any more.
Creating /usr/hdp/current/hadoop-client/conf on the failing host should solve the problem.
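A sketch of that manual fix, assuming the layout described above, where /usr/hdp/current/hadoop-client/conf is meant to be a soft link to /etc/hadoop/conf:
mkdir -p /etc/hadoop/conf
ln -s /etc/hadoop/conf /usr/hdp/current/hadoop-client/conf
Then retry the client installation from Ambari.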

Exception with git mergetool

I'm trying to configure Meld for Windows.
I added the Python environment variable and followed these steps:
git config --global merge.tool meld
git config --global mergetool.meld.path /c/Users/andarno/Downloads/meld-1.5.2/bin/meld
But when I try to merge the changes, the following message appears:
Normal merge conflict for 'folder/script.js':
{local}: modified file
{remote}: modified file
Hit return to start merge resolution tool (meld):
Traceback (most recent call last):
File "c:/Users/ben/Desktop/Meld/meld/bin/meld", line 98, in <module>
libintl = cdll.intl
File "c:\Python27\lib\ctypes\__init__.py", line 435, in __getattr__
dll = self._dlltype(name)
File "c:\Python27\lib\ctypes\__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
WindowsError: [Error 126] The specified module could not be found
folder/script.js seems unchanged.
I'm not sure of the reason for the error. Does anyone have an idea?
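The traceback shows Meld failing on its own startup line, libintl = cdll.intl, i.e. Windows cannot locate the libintl DLL; it is not a problem with your git configuration. As a quick diagnostic (not a fix), you can reproduce that single line outside Meld:
python -c "from ctypes import cdll; cdll.intl"
If this raises the same Error 126, intl.dll is not on your PATH; it typically ships with the GTK+ runtime that this Meld build expects.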

Error with gclient sync while getting the chromium code on Mac OS

I'm trying to set up the Chromium code following the documentation on Mac OS X 10.9.2.
I could successfully fetch the code with the command:
fetch --nohooks chromium --nosvn=True
but when I try to sync the projects with the gclient sync command, it breaks in the middle of the process, throwing the following OSError:
________ running '/usr/bin/python src/build/download_nacl_toolchains.py --no-arm-trusted --keep' in '/Volumes/NJHD/google'
Updating /Volumes/NJHD/google/src/native_client/toolchain/.tars/toolchain_mac_x86.tar.bz2
from https://storage.googleapis.com/nativeclient-archive2/x86_toolchain/r12790/toolchain_mac_x86.tar.bz2.
.....................................................................................
|------------------------------------------------|
..................................................Traceback (most recent call last):
File "src/build/download_nacl_toolchains.py", line 63, in <module>
sys.exit(Main(sys.argv[1:]))
File "src/build/download_nacl_toolchains.py", line 58, in Main
download_toolchains.main(args)
File "/Volumes/NJHD/google/src/native_client/build/download_toolchains.py", line 414, in main
keep=options.keep, verbose=options.verbose):
File "/Volumes/NJHD/google/src/native_client/build/download_toolchains.py", line 263, in SyncFlavor
tar.Extract()
File "/Volumes/NJHD/google/src/native_client/build/cygtar.py", line 313, in Extract
self.tar.extract(m)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py", line 2084, in extract
self._extract_member(tarinfo, os.path.join(path, tarinfo.name))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py", line 2168, in _extract_member
self.makelink(tarinfo, targetpath)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py", line 2252, in makelink
os.link(tarinfo._link_target, targetpath)
OSError: [Errno 45] Operation not supported
Error: Command /usr/bin/python src/build/download_nacl_toolchains.py --no-arm-trusted --keep returned non-zero exit status 1 in /Volumes/NJHD/google
Hook '/usr/bin/python src/build/download_nacl_toolchains.py --no-arm-trusted --keep' took 89.91 secs
It seems to me that it is complaining about os.link(tarinfo._link_target, targetpath), so I tried creating a link using that function, which works fine.
Is there any other configuration that I need to take care of?
Thanks in advance!
I placed the Chromium project on an external hard disk as you did, and I got the same error.
Perhaps you should try syncing on your internal drive instead. I haven't tried it myself, but I hope that helps.
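The likely explanation: os.link() creates a hard link, and external drives are often formatted FAT32 or exFAT, which do not support hard links, hence OSError: [Errno 45] Operation not supported. You can check the volume's filesystem like this (volume name taken from the log above):
diskutil info /Volumes/NJHD | grep "File System"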

Is there any way to run gevent-socketio 0.3.5-rc2 with gunicorn 18.0 without downgrading

I'm running:
gevent==0.13.8
gevent-socketio==0.3.5-rc2
gunicorn==18.0
And have run into the following error:
2013-11-05 06:40:00 [5671] [ERROR] Exception in worker process:
Traceback (most recent call last):
File "/home/vagrant/server/lib/python2.7/site-packages/gunicorn/arbiter.py", line 495, in spawn_worker
worker.init_process()
File "/home/vagrant/server/lib/python2.7/site-packages/gunicorn/workers/ggevent.py", line 165, in init_process
super(GeventWorker, self).init_process()
File "/home/vagrant/server/lib/python2.7/site-packages/gunicorn/workers/base.py", line 112, in init_process
self.run()
File "/home/vagrant/server/lib/python2.7/site-packages/socketio/sgunicorn.py", line 14, in run
self.socket.setblocking(1)
AttributeError: 'GeventSocketIOWorker' object has no attribute 'socket'
A previous Stack Overflow question has the solution "downgrade to version 16.0":
GeventSocketIOWorker has no attribute 'socket'
However, I'm reluctant to do this because the additions in v18.0 are really useful to me.
I'm asking here because I'm not sure if there's an easy solution that I'm missing. If not, I imagine I'll need to raise a ticket for gunicorn?
It was a version issue.
gevent-socketio 0.3.5-rc2 was uploaded to PyPI in July 2012; the fix for this issue came out in January 2013.
I solved it by using the master branch from the gevent-socketio repository on GitHub. To do this, change the line for gevent-socketio in requirements.txt to
-e git+git@github.com:abourget/gevent-socketio.git#egg=gevent_socketio
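If you do not have SSH keys configured for GitHub, the equivalent HTTPS form should also work:
-e git+https://github.com/abourget/gevent-socketio.git#egg=gevent_socketio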
