I need to use SVN on Windows and would like to set it up such that line endings are always checked out in Windows style and always committed in Unix style, regardless of which repository is used.
Is this possible or will I have to write a configuration file for each repo?
If I have to write a config file to each repo, where do I put it, what's the right filename, what goes into the file and what effect will it have on other users of the repo?
I made the following changes to the global SVN config file, but they had no effect. What else do I need to do? Or will I have to call SVN with some specific parameters for the changes to take effect on Windows?
enable-auto-props = yes
[auto-props]
*.c = svn:eol-style=native
*.cpp = svn:eol-style=native
*.cxx = svn:eol-style=native
*.h = svn:eol-style=native
*.hpp = svn:eol-style=native
*.hxx = svn:eol-style=native
*.txt = svn:eol-style=native
*.tex = svn:eol-style=native
*.bib = svn:eol-style=native
You need to set the svn:eol-style property to native for all files in the repository. Auto-props will set the property on files newly added to the repository, but for existing files you need to add the property manually:
svn propset svn:eol-style native example.c
svn commit
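If you have a lot of existing files, you probably don't want to do that one by one. A rough, untested sketch of a one-shot pass over a working copy (the extension list simply mirrors the auto-props above):
# Run from the root of an up-to-date working copy.
# Sets svn:eol-style=native on every versioned file matching the extensions.
# Note: svn will refuse files whose current line endings are inconsistent;
# those have to be fixed by hand first.
svn list -R | grep -E '\.(c|cpp|cxx|h|hpp|hxx|txt|tex|bib)$' |
while IFS= read -r f; do
    svn propset svn:eol-style native "$f"
done
svn commit -m "Set svn:eol-style=native on existing files"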
Daniel Roethisberger already gave you the correct answer. However, you might want some way to ensure that all files have svn:eol-style set to native.
I have a pre-commit hook that can ensure that the svn:eol-style property is set to native on all relevant files before it will allow a commit to take place. You might want to take a look at it to ensure that your policy is followed.
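For illustration only, the general shape of such a hook might look like this (this is a sketch, not the actual hook; the svnlook plumbing and the extension list are assumptions):
#!/bin/sh
# Sketch of a pre-commit hook: reject the commit if any added or updated
# source/text file in the transaction lacks svn:eol-style=native.
REPOS="$1"
TXN="$2"
SVNLOOK=/usr/bin/svnlook

"$SVNLOOK" changed -t "$TXN" "$REPOS" |
awk '$1 == "A" || $1 == "U" || $1 == "UU" { print $2 }' |
grep -E '\.(c|cpp|cxx|h|hpp|hxx|txt|tex|bib)$' |
while IFS= read -r path; do
    eol=$("$SVNLOOK" propget -t "$TXN" "$REPOS" svn:eol-style "$path" 2>/dev/null)
    if [ "$eol" != "native" ]; then
        echo "Policy violation: $path must have svn:eol-style=native" >&2
        exit 1
    fi
done || exit 1
exit 0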
This command-line android tool worked for me:
cd <install_dir>/adt-bundle-mac-x86_64-20140702/sdk/tools
./android list target
Make sure you pick a target listed above for the --target switch below.
./android create project --target 1 --name MyFirstApp \
--path ~/projects/android/MyFirstApp --activity MainActivity \
--package com.example.myfirstapp
This is documented at https://developer.android.com/training/basics/firstapp/creating-project.html
Then open Eclipse and import the project created above.
Append Problem
I am trying to read rules from a file and set them in a bash script; for this to work, I need to append svn:ignore rules to the directory.
I have an example set of data:
/js/blank.html
/js/index.php
/js/spacer.gif
If I try to run svn propedit svn:ignore js/ < "blank.html" or echo "test" | svn propedit svn:ignore js/, I get the following error:
Vim: Warning: Input is not from a terminal
Vim: Error reading input, exiting...
Vim: preserving files...
Vim: Finished.
svn: E200012: system('/usr/bin/editor svn-prop.tmp') returned 256
Is it possible to append rules to svn:ignore?
Alternatives
I know you can use propset to set a list of rules, one per line, as per Ignore multiple specific files with svn, but that is not the behaviour I am looking for, as I would have to order my list in bash somehow and make sure I do not overwrite any existing changes.
Interestingly, this came up in 2005 but there was no outcome; maybe I should track him down.
If anyone knows how to use propset to append, that would be useful too.
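For reference, the kind of propget/propset round trip I have in mind looks something like this (untested sketch; rules.txt stands in for my rules file):
# Append the rules from rules.txt to the existing svn:ignore of js/
# without clobbering what is already set.
tmp=$(mktemp)
svn propget svn:ignore js/ > "$tmp" 2>/dev/null
cat rules.txt >> "$tmp"
svn propset svn:ignore -F "$tmp" js/
rm -f "$tmp"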
Thanks
proplist
I am automating these additions from a file, so I want to ignore all of these apart from local.xml.sample.
ls -h errors/
404.php default enterprise processor.php
503.php design.xml local.xml.sample report.php
My propedit rules:
.htaccess
404.php
503.php
design.php
processor.php
report.php
design.xml
Proplist output:
svn proplist errors
Properties on 'errors':
svn:ignore
There are more complex examples I can give, but the baseline is that I am trying to automate all the rules from a single file, to create a reliable way of ignoring all the core files of a software package in my repository. I know I am using the wrong tool for the job, but management is management; I feel like I am hitting a nail with a piece of paper.
I have some large data files that need to be copied from source folders to build folders during our Qmake/QtCreator build. Since they are large, I only want the copy to happen for new/changed files. And I'd really like to avoid listing them all specifically in the project file. Here's what I've tried:
This attempt at copying data files fails because the DemoData folder itself is the target. Therefore the copy is only performed when the folder does not exist, not when files within it are added or changed.
DemoData.commands = $$COPY_CMD $${SRC_DATA_DIR}DemoData $${BLD_DATA_DIR}DemoData
DemoData.target += $${BLD_DATA_DIR}DemoData
PRE_TARGETDEPS += $${BLD_DATA_DIR}DemoData
QMAKE_EXTRA_TARGETS += DemoData
This approach fails because the DemoData.target item is not expected to hold a list of multiple items: qmake puts the list in quotes in the generated makefile, so it becomes a single target.
DemoData.commands = $$COPY_CMD $${SRC_DATA_DIR}DemoData $${BLD_DATA_DIR}DemoData
DEMO_DATA_FILES = $$files($${SRC_DATA_DIR}DemoData/*)
for(FILE, DEMO_DATA_FILES){
DemoData.target += $${BLD_DATA_DIR}DemoData\\$$basename(FILE)
PRE_TARGETDEPS += $${BLD_DATA_DIR}DemoData\\$$basename(FILE)
}
QMAKE_EXTRA_TARGETS += DemoData
This attempt fails because (AFAICT) qmake does not support variable names contained in other variables; it seems to do only one level of substitution. A makefile is generated, but the DemoDataX targets all have no command lines, and all attempts to display the contents of the 'commands' field generate syntax errors.
DEMO_DATA_FILES = $$files($${SRC_DATA_DIR}DemoData/*)
DEMO_DATA_NAME = DemoData
for(FILE, DEMO_DATA_FILES){
$${DEMO_DATA_NAME}.target = $${FILE}
$${DEMO_DATA_NAME}.commands = $$COPY_CMD $${FILE} $${BLD_DATA_DIR}DemoData
PRE_TARGETDEPS += $${FILE}
QMAKE_EXTRA_TARGETS += $${DEMO_DATA_NAME}
DEMO_DATA_NAME = $${DEMO_DATA_NAME}X
}
This approach works, but with two shortcomings. The minor one is that a separate 'make install' step must be performed. The major one is that the files are always copied unconditionally. Since our data files are large, this is unacceptable timewise.
DemoData.path = $${BLD_DATA_DIR}DemoData
DemoData.files = $${SRC_DATA_DIR}DemoData/*
INSTALLS += DemoData
Is there a way to do this, or am I left with some sort of external script or manually generated/maintained makefile?
Use the QMAKE_EXTRA_COMPILERS feature.
# list your files in this variable.
# Masks are available with the $$files function, but
# if your set of files changes (files added or removed)
# you have to re-run qmake explicitly afterwards, not just make
MYFILES = $$files($${PWD}/files/*.*)
copy_files.name = copy large files
copy_files.input = MYFILES
# change datafiles to a directory you want to put the files to
copy_files.output = $${OUT_PWD}/datafiles/${QMAKE_FILE_BASE}${QMAKE_FILE_EXT}
copy_files.commands = ${COPY_FILE} ${QMAKE_FILE_IN} ${QMAKE_FILE_OUT}
copy_files.CONFIG += no_link target_predeps
QMAKE_EXTRA_COMPILERS += copy_files
Add your big files to the MYFILES variable. For each file, a rule will be generated in the Makefile that copies the file to the specified directory (datafiles in the example). The original file is listed as a dependency in the rule (this is default qmake behaviour), so the copy only happens when the original file is newer than the existing copy. The generated rules are listed as dependencies of the target file rule (copy_files.CONFIG += target_predeps), so copying occurs on every build automatically.
The only caveat is this: if your set of files is dynamic (files are added or removed), you can use masks as in my example, but you have to be careful to execute qmake after changing the set. Be aware that Qt Creator builds projects by launching make, not qmake. The simplest way to ensure that qmake will be launched is to modify the .pro file.
For those who can read Russian, there is more info about QMAKE_EXTRA_COMPILERS here
Do you need the script to be cross-platform? I personally wouldn't use the copy command, but rather robocopy on Windows and rsync on Mac/Linux.
win32: $${DEMO_DATA_NAME}.commands = robocopy $${SRC_DIR} $${DST_DIR} $${FILE} /MIR /XO
!win32: $${DEMO_DATA_NAME}.commands = rsync -aru $${FILE} $${BLD_DATA_DIR}
I'm not really sure what you want to copy here, but you get the idea; you can adapt the files and/or directories.
Robocopy parameters are described here.
/MIR Mirrors a directory tree
/XO Excludes older files.
Rsync parameters are described here.
-a Archive
-r Recursive
-u Update only when the source is newer
As a side note, if you don't want to run the make install step, you can make this extra target a dependency of the project that needs these files: theProjectNeedingThoseFiles.depends += DemoData.
In the Emperor project, I'm having some issues getting intltool to work when doing an out-of-tree build. When running make check out-of-tree, which is one of the things make distcheck does, intltool fails thus:
INTLTOOL_EXTRACT="/usr/bin/intltool-extract" XGETTEXT="/usr/bin/xgettext" srcdir=../../po /usr/bin/intltool-update --gettext-package emperor --pot
can't open ../../po/../data/emperor.desktop.in: No such file or directory at /usr/bin/intltool-extract line 212.
intltool is looking for emperor.desktop.in, which is listed in po/POTFILES.in, in the source tree. However, emperor.desktop.in is generated by the configure script from a file called emperor.desktop.in.in, in order to insert the installed executable path as configured by the user, and lands in the build tree.
These are the relevant bootstrap.sh lines:
echo +++ Running intltoolize ... &&
intltoolize --force --copy &&
cat >>po/Makefile.in.in <<EOF
../data/_column_names.h:
cd ../data && \$(MAKE) _column_names.h
EOF
The setup code in configure.ac:
IT_PROG_INTLTOOL([0.35.0])
GETTEXT_PACKAGE=emperor
AC_SUBST(GETTEXT_PACKAGE)
AC_DEFINE_UNQUOTED([GETTEXT_PACKAGE], ["$GETTEXT_PACKAGE"],
[The domain to use with gettext])
AM_GLIB_GNU_GETTEXT
data/emperor.desktop.in is listed in AC_CONFIG_FILES.
data/Makefile.am contains these lines:
desktopdir = $(datadir)/applications
desktop_in_files = emperor.desktop.in
desktop_DATA = $(desktop_in_files:.desktop.in=.desktop)
#INTLTOOL_DESKTOP_RULE#
and po/POTFILES.in contains the line
data/emperor.desktop.in
You can review all the details in the public git repository if you wish.
Can I somehow tell intltool that this file will be located in the build tree, not in the source tree? Otherwise, my options appear to be to break make distcheck (not a great option), or to ship a desktop file that doesn't include the full path and assumes that the executable is installed in the PATH (just as messy, IMHO). Any other options?
In your source code you have emperor.desktop.in.in, which does not seem to appear in any rule as a dependency. That file has to be converted first to emperor.desktop.in and then to emperor.desktop, which does not seem to happen in your data/Makefile.am.
desktopdir = $(datadir)/applications
desktop_in_in_files = emperor.desktop.in.in
desktop_in_files = $(desktop_in_in_files:.desktop.in.in=.desktop.in)
desktop_DATA = $(desktop_in_files:.desktop.in=.desktop)
#INTLTOOL_DESKTOP_RULE#
[...]
EXTRA_DIST = \
$(desktop_in_in_files) \
[...]
EXTRA_DIST contains $(desktop_in_in_files), and make will know how to deal with that.
Some further digging has brought me to believe that the answer is: intltool does not support source files that aren't source files in the project. Ergo, any additional processing must be done after intltool is through.
Intltool requires the lines in POTFILES to be relative to the (build-time) working directory. The file POTFILES is generated by the configure script from POTFILES.in with a simple sed script defined in the IT_PO_SUBDIR autoconf macro (called by IT_PROG_INTLTOOL) that simply prepends the relative location of the top-level source directory to the paths. Alas, modifying POTFILES does not help: the intltool-extract script does everything it can to get the source directory right. I don't believe files that are sometimes inside and sometimes outside the source tree can be supported without modifying intltool itself.
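To illustrate what that post-processing could look like: a rough shell sketch of splicing the configured executable path into the .desktop file after intltool has merged the translations (bindir is a placeholder, and the Exec= pattern is an assumption about the file's contents):
# Hypothetical: intltool produces data/emperor.desktop from a path-less
# emperor.desktop.in; afterwards, substitute the configured bindir into Exec=.
bindir=/usr/local/bin    # placeholder; a Makefile rule would use $(bindir)
sed -e "s|^Exec=emperor|Exec=$bindir/emperor|" data/emperor.desktop \
    > data/emperor.desktop.tmp &&
mv data/emperor.desktop.tmp data/emperor.desktop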
I have imported a huge hierarchy of Maven projects into IntelliJ IDEA, and IDEA has created .iml project files at all the different levels. I would like to svn:ignore these files.
There are several similar questions, for example this one: svn (subversion) ignore specific file types in all sub-directories, configured on root?
However, the answer is always to apply svn:ignore with the --recursive flag. That's not what I am looking for; I just want to ignore the files I created, not set a property on the hundreds of directories underneath.
Basically, what I have is the output of
svn status | grep .iml
which looks like this:
? foo/bar/bar.iml
? foo/baz/phleem.iml
? flapp/flapp.iml
etc.
What I would like to do is, for each entry dir/project.iml, add svn:ignore *.iml to dir. I am guessing I have to pipe the above grep to sed or awk and from there to svn, but I am at a total loss as to the exact sed or awk command on the one hand, and the exact svn command (one that won't override existing svn:ignore values) on the other.
Update: I am also not looking for solutions based on find, because there may be projects in this hierarchy where .iml files are in fact committed and I wouldn't want to interfere with those projects, so I'm looking to add the property only for the .iml files my IDE has created.
You can set up your client to globally ignore given file extensions. Just add
global-ignores = *.iml
into your Subversion config file.
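If you'd rather not change the config file permanently, a reasonably recent client (1.6 or later, if I remember correctly) also accepts the same setting on the command line for a one-off run:
svn status --config-option=config:miscellany:global-ignores='*.iml'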
Update: If you only want to ignore .iml files in the directories involved, you can try
svn status | grep '^\?.*\.iml' | sed 's=^? *=./=;s=/[^/]*$==' | xargs svn propset svn:ignore '*.iml'
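Note that plain propset replaces whatever svn:ignore value a directory already has. If some of those directories already carry ignore rules you want to keep, a longer loop along these lines might be safer (untested sketch):
# For every directory containing an unversioned .iml file, append '*.iml'
# to its existing svn:ignore value instead of replacing it.
tmp=$(mktemp)
svn status | grep '^?.*\.iml$' | awk '{ print $2 }' | xargs -n1 dirname | sort -u |
while IFS= read -r dir; do
    existing=$(svn propget svn:ignore "$dir" 2>/dev/null)
    case "$existing" in
        *'*.iml'*) continue ;;    # already ignored here, skip
    esac
    { printf '%s\n' "$existing"; echo '*.iml'; } > "$tmp"
    svn propset svn:ignore -F "$tmp" "$dir"
done
rm -f "$tmp"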
We have a large base of code that contains several shared projects, solution files, etc in one directory in SVN. We're migrating to Mercurial. I would like to take this opportunity to reorganize our code into several repositories to make cloning for branching have less overhead. I've already successfully converted our repo from SVN to Mercurial while preserving history. My question: how do I break all the different projects into separate repositories while preserving their history?
Here is an example of what our single repository (OurPlatform) currently looks like:
/OurPlatform
---- Core
---- Core.Tests
---- Database
---- Database.Tests
---- CMS
---- CMS.Tests
---- Product1.Domain
---- Product1.Stresstester
---- Product1.Web
---- Product1.Web.Tests
---- Product2.Domain
---- Product2.Stresstester
---- Product2.Web
---- Product2.Web.Tests
==== Product1.sln
==== Product2.sln
All of those are folders containing VS projects, except for the solution files. Product1.sln and Product2.sln both reference all of the other projects. Ideally, I'd like to take each of those folders and turn them into separate Hg repos, and also add new repos for each solution (they would act as parent repos). Then, if someone was going to work on Product1, they would clone the Product1 repo, which would contain Product1.sln and subrepo references to ReferenceAssemblies, Core, Core.Tests, Database, Database.Tests, CMS, and CMS.Tests.
So, it's easy to do this by just hg init'ing in the project directories. But can it be done while preserving history? Or is there a better way to arrange this?
EDIT:
Thanks to Ry4an's answer, I was able to accomplish my goal. I wanted to share how I did it here for others.
Since we had a lot of separate projects, I wrote a small bash script to automate creating the filemaps and to generate the final bat script that actually does the conversion. What wasn't completely apparent from the answer is that the convert command needs to be run once for each filemap, to produce a separate repository for each project. This script should be placed in the directory above an SVN working copy that you have previously converted. I used the working copy since its file structure best matched what I wanted the final new hg repos to be.
#!/bin/bash
# this requires you to be in: /path/to/svn/working/copy/, and issue: ../filemaplister.sh ./
for filename in *
do
extension=${filename##*.} #$filename|awk -F . '{print $NF}'
if [ "$extension" == "sln" -o "$extension" == "suo" -o "$extension" == "vsmdi" ]; then
base=${filename%.*}
echo "#$base.filemap" >> "$base.filemap"
echo "include $filename" >> "$base.filemap"
echo "C:\Applications\TortoiseHgPortable\hg.exe convert --filemap $base.filemap ../hg-datesort-converted ../hg-separated/$base > $base.convert.output.txt" >> "MASTERGO.convert.bat"
else
echo "#$filename.filemap" >> "$filename.filemap"
echo "include $filename" >> "$filename.filemap"
echo "rename $filename ." >> "$filename.filemap"
echo "C:\Applications\TortoiseHgPortable\hg.exe convert --filemap $filename.filemap ../hg-datesort-converted ../hg-separated/$filename > $filename.convert.output.txt" >> "MASTERGO.convert.bat"
fi
done;
mv *.filemap ../hg-conversion-filemaps/
mv *.convert.bat ../hg-conversion-filemaps/
This script looks at every file in an SVN working copy and, depending on the type, either creates a new filemap file or appends to an existing one. The if is really just there to catch miscellaneous Visual Studio files and place them into a separate repo. This is meant to be run in bash (Cygwin in my case), but the actual convert command is run through the version of hg shipped with TortoiseHg due to forking/process issues on Windows (gah, I know...).
So you run the MASTERGO.convert.bat file, which looks at your converted hg repo and creates separate repos using the supplied filemaps. After it is complete, there is a folder called hg-separated that contains a folder/repo for each project, as well as a folder/repo for each solution. You then have to manually clone all the projects into a solution repo and add the clones to the .hgsub file. After committing, an .hgsubstate file is created and you're set to go!
With the example given above, my .hgsub file looks like this for "Product1":
Product1.Domain = /absolute/path/to/Product1.Domain
Product1.Stresstester = /absolute/path/to/Product1.Stresstester
Product1.Web = /absolute/path/to/Product1.Web
Product1.Web.Tests = /absolute/path/to/Product1.Web.Tests
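The wiring itself (clone each project into the solution repo, then commit .hgsub) went roughly like this, shown here for a single subrepo and with the same placeholder paths as above:
cd hg-separated/Product1
hg clone /absolute/path/to/Product1.Domain Product1.Domain
echo 'Product1.Domain = /absolute/path/to/Product1.Domain' >> .hgsub
# ...repeat for the other subrepos listed above...
hg add .hgsub
hg commit -m "Wire up subrepos"    # this commit also records .hgsubstate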
Once I transfer these repos to a central server, I'll be manually changing the paths to be urls.
Also, there is no analog to the initial OurPlatform svn repo, since everything is separated now.
Thanks again!
This can absolutely be done. You'll want to use the hg convert command. Here's the process I'd use:
1. Convert everything to a single Mercurial repository using hg convert with a source type of svn and a dest type of hg (it sounds like you've already done this step).
2. Create a collection of filemap files for use with hg convert's --filemap option.
3. Run hg convert with source type hg and dest type hg, with the source being the Mercurial repo created in step one -- and do it once for each of the filemaps you created in step two (sketched below).
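Concretely, the two convert passes might look something like this (untested; the SVN URL and repo names are placeholders, and the convert extension must be enabled in your hgrc):
# Pass 1: SVN -> Hg, one repository with everything (skip if already done).
hg convert --source-type svn --dest-type hg \
    http://svn.example.com/OurPlatform OurPlatform-hg

# Pass 2: Hg -> Hg with a filemap, run once per new repository.
hg convert --source-type hg --dest-type hg --filemap Core.filemap OurPlatform-hg Core
hg convert --source-type hg --dest-type hg --filemap Core.Tests.filemap OurPlatform-hg Core.Tests
# ...and so on for each filemap.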
The filemap syntax is shown in the hg help convert output, but here's the gist:
The filemap is a file that allows filtering and remapping of files and
directories. Comment lines start with '#'. Each line can contain one of
the following directives:
include path/to/file
exclude path/to/file
rename from/file to/file
So in your example your filemaps would look like this:
# this is Core.filemap
include Core
rename Core .
Note that if you have an include, the exclusion of everything else is implied. Also, that rename line ends in a dot and moves everything up one level.
# this is Core.Tests.filemap
include Core.Tests
rename Core.Tests .
and so on.
Once you've created the broken-out repositories, you can delete the has-everything repo created in step one and start setting up your subrepo configuration in .hgsub files.