I recently converted my laptop to an Ubuntu/Win7 dual-boot system, each OS with its own partition, plus a third shared partition. I'd like to use Eclipse and access my SVN repository regardless of which system I'm booted into at the time.
If I have my local SVN repository on the shared partition, how can I point the workspaces on both Ubuntu and Windows at the same files?
The only other alternative I've come up with is to give each OS its own working copy and apply commits and updates as necessary.
Edit for clarification
I'm not asking whether it's possible to have a single workspace for both Linux and Windows. I had in mind a single source folder on the shared partition that is linked into each workspace. That way the file paths would stay OS-specific, and only the source code would be accessed from the shared location.
I don't think this is really possible.
There are a number of files in the workspace's .metadata folder (for instance the definitions of the JREs/JDKs or the Eclipse path) that depend on the underlying file system (e.g. c:\eclipse on Windows versus /home/me/eclipse on Linux).
What you might be able to do, at best, is maintain two different workspaces, one for Windows and one for Linux.
These two distinct workspaces would, in turn, share a number of projects. These projects would not be in the so-called default location (that is, under the workspace folder) of either workspace, but under a separate hierarchy on your shared partition. Yet because of this decoupling you would end up doing several things twice (such as defining launch configurations, etc.). Which is fair enough, I guess.
Finally, since Linux can read NTFS file systems pretty well using ntfs-3g (with the exception of ACLs, which is actually a plus here), you can put your shared partition on NTFS. Windows is far less apt at reading/writing ext3 (let alone ext4). Just make sure you mount your NTFS partition with a common character set.
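As a rough illustration, an /etc/fstab entry along these lines should do it; the device name, mount point, uid/gid and locale value are placeholders to adapt to your own system (check the ntfs-3g man page for the options your version supports):
/dev/sda3  /media/shared  ntfs-3g  defaults,uid=1000,gid=1000,umask=022,locale=en_US.utf8  0  0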
In addition, instead of dual booting, you could run Windows inside a VM (e.g. VirtualBox) and share the common data as a Linux shared folder, either through standard Samba or through VirtualBox's built-in shared-folder mechanism. The difference from a dual boot is that you could, in theory, cheat Eclipse and access your two different workspaces simultaneously while they share the same projects. Of course this would require some tweaking on the Samba side for lock management.
I would not recommend sharing the same Eclipse workspace between two operating systems, as they use different path syntax. You can also run into character encoding and/or line separator issues in files placed into source control. Typically a source control system will adjust these during "check out" and reverse the adjustment on "check in". You can have weird problems if you retrieve code in Windows and check it back in on Linux (or the other way around).
I would recommend keeping your Windows and Linux workspaces separate.
Related
The capacity of my SSD is just 60 GB, and I have just over 5 GB of free space at the moment. Is there a way to install Xcode directly on the external drive? Or, to do so, would I first have to make this drive bootable and boot my system from it?
There are various possible solutions, including making use of symlinks, dual-booting two versions of macOS (one on the external SSD), and many more.
But the best way I found was to create a new macOS user and change its home directory to the external SSD (via the advanced user settings under Users & Groups in System Preferences).
The exact steps I followed:
Create a new APFS partition on the external SSD with 100 GB of storage (say, NewVol).
Create a new macOS user and change its home directory to /Volumes/NewVol/user.
Log into the new user with the external SSD connected, and install Xcode in ~/Applications (i.e. the user's local Applications folder, not /Applications).
This works best because you don't need to manually manage symlinks, and symlinks can create problems during builds. All the required files (including builds and temporary files) are stored in the user directory, so no space is occupied on the internal drive. There's also no hassle of installing a completely separate OS and going through cycles of reboots to switch between them.
There are a couple of options you can consider.
Move some files to the external drive instead of installing applications on it. This would be your best bet, since applications have dependencies. Also, if you run them from your SSD, they will get better performance.
If you absolutely need your files on your SSD and can't move them, then I would suggest moving any third-party applications to see if you can free up space for Xcode, and run it from your SSD.
If the two options above don't work for you, then you will have to try and work with Xcode. There is no easy way to change the install location. Your option here would be to free up some space temporarily by moving bigger files to an external drive. Then do the Xcode install in your Applications folder. Once that's done, move Xcode to the external drive and take your files back to your SSD. Here is another question that talks about the same topic.
Many questions on SO say that the "Windows developer guidelines" or "Windows design guidelines" state you shouldn't write temporary or program data to the Program Files area, but as far as I can tell none of them actually links to a piece of documentation that says as much. Searching MSDN has yielded no results. Windows makes the area read-only, so it is enforced by the OS, but that doesn't mean developers didn't try to write there anyway (e.g., when porting older, XP-and-earlier programs forward).
I realize that it seems odd to ask about this so late into Windows development (since, as a commenter below pointed out, it has been enforced by the OS for more than a decade), but a document that says so is sometimes necessary to satisfy people.
With that in mind, does Microsoft have a published document stating that we shouldn't write application data to the Program Files area, and if so, where is it?
From Technical requirements for the Windows 7 Client Software Logo Program:
Install to the correct folders by default
Users should have a consistent and secure experience with the default installation location of files, while maintaining the option to install an application to the location they choose. It is also necessary to store application data in the correct location to allow several people to use the same computer without corrupting or overwriting each other's data and settings.
Windows provides specific locations in the file system to store programs and software components, shared application data, and application data specific to a user:
Applications should be installed to the Program Files folder by default. User data or application data must never be stored in this location because of the security permissions configured for this folder (emphasis added).
All application data that must be shared among users on the computer should be stored within ProgramData.
All application data exclusive to a specific user and not to be shared with other users of the computer must be stored in Users\<username>\AppData.
Never write directly to the "Windows" directory and/or subdirectories. Use the correct methods for installing files, such as fonts or drivers.
In "per-machine" installations, user data must be written at first run and not during the installation. This is because there is no correct user location to store data at time of installation. Attempts by an application to modify default association behaviors at a machine level after installation will be unsuccessful. Instead, defaults must be claimed on a per-user level, which prevents multiple users from overwriting each other's defaults.
And I'm quite sure that there's similar stuff for every Windows version of the NT family going back to Windows NT 4 or even earlier.
See also this question.
Edit: the original link in this post to the Windows 7 Logo program no longer exists. Here is the current link to the Certification requirements for Windows Desktop Apps. See Section 10, "Apps must install to the correct folders by default".
In later versions of Windows (Vista, 7, and of course the server versions), access permissions are restricted for "special folders", including "Program Files". Even if your program is elevated and has sufficient privileges to write to this folder, it is still a bad idea.
I don't know of any guidelines that state this, but there is a list of special folders and what they are meant for. The fact that there is a special folder for nearly every type of data I can imagine means there is no need to use the Program Files folder.
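To make those locations concrete, here is a minimal Python sketch (the application name is made up; the environment variables are one common way to resolve the folders, the SHGetKnownFolderPath API being the more formal route):
import os
from pathlib import Path
APP = "MyExampleApp"  # hypothetical application name
# Per-user settings that may roam with the profile
user_data = Path(os.environ["APPDATA"]) / APP        # e.g. C:\Users\<name>\AppData\Roaming\MyExampleApp
# Per-user, machine-local data such as caches
local_data = Path(os.environ["LOCALAPPDATA"]) / APP  # e.g. C:\Users\<name>\AppData\Local\MyExampleApp
# Data shared by every user of the machine
shared_data = Path(os.environ["PROGRAMDATA"]) / APP  # e.g. C:\ProgramData\MyExampleApp
for folder in (user_data, local_data, shared_data):
    folder.mkdir(parents=True, exist_ok=True)  # never create these under Program Files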
Portable applications can be run from USB-drives and all and are thus very convenient, but unfortunately they are slow (as a USB drive is usually slower).
How exactly does the installation of a portable app differ from that of a normal app?
I know they do not create registry entries and the like, but then how do they achieve the same things as other, 'normal' apps?
Any application that stores all required information in a self-contained way can be made "portable".
For example, Eclipse doesn't require installation and keeps all preferences within the workspace, so it could be considered portable.
An application which does any of the following isn't immediately portable:
Uses the registry
Uses the user's home directory, i.e. "C:\Users" or "C:\Documents and Settings"
Requires installation of certain files to hard-coded locations
In order to make these applications portable, they can be processed or run within a mini-VM (like ThinApp) so that registry calls and file accesses are redirected to locations on the USB drive.
They don't store anything in the registry or on the hard disk. Application configuration options and other settings are saved on the USB drive, usually (but not always) in either an .INI file or an XML file.
They don't have any dependencies on system resources (such as the registry) that require a higher level of security to access, nor do they have any dependencies on any libraries not shipped with the application on the thumb drive (unless the dependencies are commonly found in a typical install).
Most simple apps meet these requirements and could hypothetically be run off a thumb drive.
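A minimal Python sketch of that idea (the file and section names are invented): the settings file sits next to the program itself, so everything travels with the thumb drive and nothing touches the registry or the user's profile.
import configparser
import sys
from pathlib import Path
# Keep settings next to the program so they travel with the drive
# (sys.executable is the right base when the script is frozen into an .exe).
app_dir = Path(sys.executable).parent if getattr(sys, "frozen", False) else Path(__file__).parent
settings_path = app_dir / "settings.ini"
config = configparser.ConfigParser()
config.read(settings_path)  # quietly does nothing if the file doesn't exist yet
if "ui" not in config:
    config["ui"] = {}
config["ui"]["last_window_size"] = "800x600"
with open(settings_path, "w", encoding="utf-8") as f:
    config.write(f)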
We have tremendous problems with Visual Studio (2008, if that matters) locking up and slowing down when accessing projects over a network drive. It can take several minutes to open a large Web site project through a mapped drive, and saving even a single file can take a minute or more.
I fired up Wireshark and watched the traffic. VS, it seems, requests massive amounts of files from the network -- there's an enormous amount of SMB traffic. I've done some research, and this traffic seems to stem from two situations.
VS has to have everything in its own process to provide Intellisense.
VS needs to have all the source in order to compile the project.
All the advice I've read seems to boil down to the same thing: work locally, not on a remote machine, then push your code to an integration server via source control.
This would sure solve our problems (VS is quite fast working locally), but what if you can't work locally? What if the project and the infrastructure required to run it is too large and complicated to be replicated on everyone's individual machines?
We've gone 'round this problem a couple times, and the only way we can figure to work on these projects is direct access via a mapped drive. However, the VS slowness and lockups are really becoming a problem.
One solution: we installed VS on the server and work on the projects directly on the servers via RDP. Seriously.
So, I ask:
What does everyone else do? Do you work via the network, or do you replicate projects locally? If remotely, do you suffer from VS performance issues?
We work locally and use SVN to keep all our code on the server.
I find VS 2008 quite slow working locally sometimes so I wouldn't fancy working on a network share.
Trying to compile over a network share is horribly slow using Visual Studio. Your start times will be bad as the IntelliSense database is regenerated, each compilation has to go over the network multiple times, and linking takes forever.
If you need the output of your compilation on the network, I'd recommend doing your compile locally and defining a post-build command to copy the results to your share.
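For instance, a post-build event along these lines copies the output to the share after every local build (the server and share names are placeholders for your own environment):
xcopy "$(TargetDir)*" "\\yourserver\yourshare\$(ProjectName)" /Y /I /E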
If, as you say, you cannot pull everything locally, then I'd suggest your project is too big and needs to be broken up into more manageable chunks. For a multi-tier application, break it up by tier and invest in some form of continuous integration (e.g. CruiseControl) to automatically build individual pieces. In this way you can work locally on a particular piece and pull the pre-built portions from CI for the other pieces of the application.
I'm not terribly surprised that using VS to load projects over a network share has performance issues. VS (in any language) is constantly getting information from files in the project. Once you start loading this over a network you're at the mercy of the underlying network connection. All lags and access issues will directly translate into VS having an issue loading file contents.
I would advise copying the solution locally and using some form of source code control to sync the project on the share.
If the code is too complicated to install on everyone's machine, then don't put it on everyone's machine. Does everyone need to have everything in order to do productive work?
I have 79 projects in my solution that I work with. Several hundred thousand lines of code. I pull my source down everyday from TFS and build it; it's a lot of code, but it's a far better solution than trying to work over a network share.
A more legitimate case for having the source code on a share is when the host is non-Windows and a number of virtual Windows machines run on it.
I have this exact situation where my desktop machine (the host) is running Debian and I use VMware to run various virtual Windows machines (the guests), including one that has Visual Studio installed so that I can target Windows OS's. Having the source code on a Samba share on the host machine has the following pro's:
The source is not duplicated, so there is no way to confuse different copies while working on several virtual machines at the same time.
I have full control over the source from my preferred OS.
I can turn any of the virtual machines on and off, or roll back to a snapshot, without the risk of losing changes.
I can build (etc.) from the same source on several machines without having to commit changes before the source is fully tested (reason: I have to use Subversion < 1.5).
The only problem with this setup is that Visual Studio (6,7,8,9) is painfully slow.
I have mounted the partition (on which the share lies) with "relatime", and this helps insofar as the disk activity on the share is moderate, but Visual Studio keeps the (virtual) network card occupied all the time.
Any solutions to this would be very appreciated.
I encountered similar problems every time I worked (work = anything other than just copying/pasting files) over a network drive. The problem occurred with Zend Studio and Eclipse.
Why not use any kind of source control?
When working on Windows based projects I've always worked locally.
Once at a unix shop (AIX iirc) developers would work via NFS mount and checkin/checkout via RCS...
I'm using VS2005 across a network share and not having any performance issues. However, it is a new server (Windows Server 2008). I don't have any other data points for VS, since using it at work is relatively new for me.
However, some data points from using NetBeans for previous projects on a network share: the local build time for my project was 2 minutes on Vista, on a fast dual-core 64-bit AMD machine. For a network-share project on a Server 2003 box, it was 20 minutes. Building that same project locally on an ancient Tablet PC (1 GHz, single core) running XP took around 5 minutes. Interestingly enough, the Tablet PC could build on the Server 2003 box in the same 5 minutes.
For those asking "why" on the network share. The network share is automatically backed up, archived, etc. Also, that way I can very easily look at the same projects from multiple machines without having to worry about pushing back into the repository, etc. Once you've gone to having your dev stuff on a device where you can get to it from anywhere/anything, you'll never want to do local storage again!
I have performance problems with anything over the network; networks just aren't good enough yet.
I thought it was common knowledge that disk-speed is one of the major "slowness" factors when it comes to using VS in Windows. Most dev machines I've built have had projects located on 10k RPM RAID0 drives, or at least a single 10k RPM drive. And even then it seems slow sometimes. Just the way it is, I suppose, until VS2009/VS2010 fixes it? :)
From my experience, this lag when working on a network share is 99% due to Intellisense. Disable it and you'll see.
Disabling IntelliSense indeed speeds up saving and opening files through a UNC share dramatically.
http://blogs.msdn.com/saraford/archive/2007/12/03/did-you-know-how-to-turn-off-intellisense-by-default.aspx
but then again, as stated in other comments, you might as well use a good text editor
I've also experienced the problems with performance mentioned above. It seems to vary from project to project, but I did find one way of speeding up performance significantly for some project types.
Following the advice in this article made a previously unusable project on a network location (it would take minutes to open one file) perform almost like a local project. The basic gist is that you need to grant FULL TRUST to the network location:
To grant permission to all your projects in your Visual Studio Projects folder located on the network, follow these 8 steps:
Open Microsoft .NET Framework 1.1 (or 2.0) Configuration, which you'll find under Administrative Tools in the Control Panel.
Expand Runtime Security Policy | Machine | Code Groups | All_Code | LocalIntranet_Zone. In the right-hand pane, click Add a Child Code Group.
In the dialog that follows, choose Create a new code group and fill in a Name like Visual Studio Projects.
Optionally, provide a Description for the Code Group. (You'll see the description when you click a Code Group in the left tree, helping you identify the various Code Groups you may have.)
In the Condition Type drop-down, choose URL.
For the URL field, type something like this:
file://YourServer/My Documents/Visual Studio Projects/*
Under Use existing permission set, choose FullTrust (that is, if you trust your own applications; if you don't, choose a different permission set or create a new one).
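If you prefer the command line, the same code group can be added with caspol.exe; a hedged sketch, assuming the default group labels where 1.2 is LocalIntranet_Zone and using the same placeholder path as above:
caspol -m -ag 1.2 -url "file://YourServer/My Documents/Visual Studio Projects/*" FullTrust -n "Visual Studio Projects"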
Not sure why this works, but it made a previously unusable .NET 2.0 project perform significantly better.
Original article: http://imar.spaanjaars.com/364/how-do-i-allow-my-visual-studio-net-projects-to-run-from-a-network-location
I was having the same problem. I have a local copy of our build system, which expects certain drive letters, and was also experiencing slowness.
I have solved the problem by adding the following registry keys:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices]
"R:"="\\DosDevices\\D:\\devel\\build"
"S:"="\\DosDevices\\D:\\devel\\src"
Note that the double '\'s above are part of the .reg file format. When using regedit use single '\' throughout.
My build times were divided by 3. :)
I found the info in the wikipedia article on the SUBST command.
As a developer, tools that store configuration/options in the registry are the bane of my life. I can't easily track changes to those options, can't easily port them from machine to machine, and it all makes me really yearn for the good old days of .INI files...
When writing my own applications, what - if anything - should I choose to put in the registry rather than in old-fashioned configuration files, and why?
Originally (WIN3) configuration was stored in the WIN.INI file in the windows directory.
Problem: WIN.INI grew too big.
Solution (Win31): individual INI files in the same directory as the program.
Problem: That program may be installed on a network and shared by many people.
Solution (Win311): individual INI files in the user's Windows directory.
Problem: Many people may share a Windows folder, and it should be read-only anyway.
Solution (Win95): Registry with separate sections for each user.
Problem: Registry grew too big.
Solution (WinXP): Large blocks of individual data moved to user's own Application Data folder.
Problem: Good for large amounts of data, but rather complex for small amounts.
Solution (.NET): small amounts of fixed, read-only data stored in .config (XML) files in the same folder as the application, with an API to read them. (Read/write or user-specific data stays in the registry.)
Coming at this both from a user perspective and a programmer's perspective, I would have to say there really isn't a good excuse to put something in the registry unless it is something like file associations or machine-specific settings.
I come from the school of thought that says a program should be runnable from wherever it is installed, and that the installation should be completely movable within a machine, or even to another machine, without affecting how it runs.
Any configurable options or required DLLs etc., if they are not shared, should reside in a subdirectory of the installation directory, so that the whole installation is easily moved.
I use a lot of smaller utility-like programs, so if it can't be installed on a USB stick, plugged into another machine, and just run, then it's not for me.
When - you are forced to, due to legacy integration, or because your customer's sysadmin says "it shall be so", or because you're developing in an older language that makes it more difficult to use XML.
Why - Primarily because the registry is not as portable as copying a config file that is sitting next to the application (and is called almost the same).
If you're using .NET 2 or later, you've got App.config and user.config files, and you don't need to register DLLs in the registry, so stay away from it.
Config files have their own issues (see below), but these can be coded around and you can alter your architecture.
Problem: Applications needed configurable settings.
Solution: Store settings in a file (WIN.INI) in the Windows folder - use section headings to group data (Win3.0).
Problem: WIN.INI file grew too big (and got messy).
Solution: Store settings in INI files in the same folder as the application (Win3.1).
Problem: Need user-specific settings.
Solution: Store user-settings in user-specific INI files in the user's Windows directory (Win3.11), or in user-specific sections in the application's INI file.
Problem: Security - some application settings need to be read-only.
Solution: Registry with security as well as user-specific and machine-wide sections (Win95).
Problem: Registry grew too big.
Solution: User-specific registry moved to user.dat in the user's own "Application Data" folder and only loaded at login (WinNT).
Problem: In large corporate environments you log onto multiple machines and have to set EACH ONE up.
Solution: Differentiate between local (Local Settings) and roaming (Application Data) profiles (WinXP).
Problem: Cannot xcopy deploy or move applications like the rest of .Net.
Solution: APP.CONFIG XML file in the same folder as the application - easy to read, easy to manipulate, easy to move, can track if changed (.Net1).
Problem: Still need to store user-specific data in a similar (i.e. xcopy deploy) manner.
Solution: USER.CONFIG XML file in user's local or roaming folder and strongly-typed (.Net2).
Problem: CONFIG files are case-sensitive (not intuitive to humans), require very specific open/close tags, connection strings cannot be set at run time, setup projects cannot write settings (as easily as to the registry), the user.config file cannot easily be located, and user settings are blown away with each new revision installed.
Solution: Use the ITEM member to set connection strings at runtime, write code in an Installer class to change the App.Config during install and use the application settings as defaults if a user setting is not found.
Microsoft policy:
Before Windows 95, we used INI files for application data.
In the Windows 95 - XP era, we used the registry.
From Windows Vista on, we use INI-style files again, although they are now XML-based.
The registry is machine-dependent. I have never liked it because it is getting too slow and it is almost impossible to find the things you need. That's why I like simple INI or other settings files. You know where they are (the application folder or a user folder), so they are easily portable and human-readable.
Is the world going to end if you store a few window positions and a list of most recently used items in the Windows registry? It's worked okay for me so far.
HKEY_CURRENT_USER is a great place to store trivial user data in small quantities. That's what it's for. It seems silly not to use it for its intended purpose just because others have abused it.
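A small Python sketch of that intended use (the key name is made up): a couple of DWORD values for a window position under HKEY_CURRENT_USER, read back with sensible defaults.
import winreg
KEY = r"Software\MyExampleApp"  # hypothetical key name
# Save a window position under HKEY_CURRENT_USER
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY) as key:
    winreg.SetValueEx(key, "WindowX", 0, winreg.REG_DWORD, 120)
    winreg.SetValueEx(key, "WindowY", 0, winreg.REG_DWORD, 80)
# Read it back, falling back to defaults if the values aren't there yet
try:
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as key:
        x, _ = winreg.QueryValueEx(key, "WindowX")
        y, _ = winreg.QueryValueEx(key, "WindowY")
except FileNotFoundError:
    x, y = 100, 100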
Registry reads and writes are thread-safe, but files are not. So it depends on whether or not your program is single-threaded.
Settings that you want to have available in a user's roaming profile should probably go in the registry, unless you actually want to go to the effort of looking for the user's Application Data folder by hand. :-)
If you are developing a new app and you care about portability, you should NEVER store data in the Windows registry, since other OSes don't have a (Windows) registry (duh note - this may be obvious, but it often gets overlooked).
If you're only developing for Win platforms ... try to avoid it as much as possible. Config files (possibly encrypted) are a way better solution. There's no gain in storing data into the registry - (isolated storage is a much better solution for example if you're using .NET).
Slightly off-topic, but since I see people concerned about portability, the best approach I've ever used is Qt's QSettings class. It abstracts the storage of the settings (registry on Windows, XML preference file on Mac OS and Ini files on Unix). As a client of the class, I don't have to spend a brain cycle wondering about the registry or anything else, it Just Works (tm).
http://doc.trolltech.com/4.4/qsettings.html#details
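The same class is reachable from Python through PyQt; a minimal sketch (the organization and application names are invented) that ends up in the registry on Windows and in a platform-appropriate preferences file elsewhere:
from PyQt5.QtCore import QSettings
settings = QSettings("ExampleOrg", "ExampleApp")  # hypothetical names
settings.setValue("editor/font_size", 12)
settings.setValue("recent_files", ["a.txt", "b.txt"])
font_size = int(settings.value("editor/font_size", 10))  # second argument is the default
recent_files = settings.value("recent_files", [])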
Personally I have used the registry to store install paths for use by the (un)install scripts. I'm not sure if this is the only possible option, but seemed like a sensible solution. This was for an app that was solely in use on Windows of course.
Usually, if you don't put settings in the registry, you mostly use it to read current Windows settings, change file associations, etc.
Now, if you need to detect whether your software is already installed, you can make a minimal entry in the registry; that's a location you can find again from any configuration. Or search for a folder of a given name in Application Data.
If I look at my Documents and Settings folder, I see a lot of software using the Unix dot notation for settings folders:
.p4qt
.sqlworkbench
.squirrel-sql
.SunDownloadManager
.xngr
.antexplorer
.assistant
.CodeBlocks
.dbvis
.gimp-2.4
.jdictionary
.jindent
.jogl_ext (etc.)
and in Application Data, various folders with editor or software names. That seems to be the current trend, at least among portable applications...
WinMerge uses a slightly different approach, storing data in registry, but offering Import and Export of options in the config dialog.
I believe that the Windows registry was a good idea, but because of great abuse by application developers and a lack of standard policies encouraged/mandated by Microsoft, it grew into an unmanageable beast. I hate using it for the reasons you've mentioned; there are, however, some occasions where using it makes sense:
Leaving a trace of your application after your application has been uninstalled (e.g. remember user's preferences in case the application is installed again)
Share configuration settings between different applications - components
In .NET there really is NOT ever a need.
Here are 2 examples that show how to use project properties to do this.
These examples do this via user-scoped project properties, but the same could/can be done at the application scope as well.
More here:
http://code.msdn.microsoft.com/TheNotifyIconExample
http://code.msdn.microsoft.com/SEHE
(late to the discussion but) Short Answer: Group Policy.
If your customer's IT department wants to enforce settings related to Windows or to the component(s) you're writing or bundling in, such as a link speed, a custom error message, or a database server to connect to, this is still typically done via Group Policy, which ultimately manifests as settings stored in the registry. Such policies are enforced from the time Windows starts up or the user logs in.
There are tools to create custom ADMX templates that map your components' settings to registry locations and give the administrator a common interface to enforce the policies he or she needs, while showing only those settings that are meaningful to enforce this way.
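By way of illustration, registry-based policies conventionally land under Software\Policies\<Company>\<Product>, and the application reads the enforced value at run time, letting the machine-level policy win over the user-level one. A hedged Python sketch with invented names:
import winreg
# Hypothetical location; a custom ADMX template maps each setting
# to a value under Software\Policies\<Company>\<Product>.
POLICY_KEY = r"Software\Policies\ExampleCorp\ExampleApp"
def read_policy(name, default):
    # Machine policy takes precedence over user policy; fall back to the app default.
    for hive in (winreg.HKEY_LOCAL_MACHINE, winreg.HKEY_CURRENT_USER):
        try:
            with winreg.OpenKey(hive, POLICY_KEY) as key:
                value, _ = winreg.QueryValueEx(key, name)
                return value
        except FileNotFoundError:
            continue
    return default
db_server = read_policy("DatabaseServer", "localhost")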