Is there a more rapid method to download gcc-cilkplus? - gcc

I am trying to create a parallel application in C++ and I chose to use the Intel Cilk Plus libraries for this purpose.
My problem is just that I am still trying to download the extension for g++ and compile it on my machine, but this thing is taking ages. As explained in the Intel Cilk Library installation doc, I am trying to get the compiler sources to compile them. I am actually trying to go for the GCC 4.9 release.
Trying SVN...
I did try using svn, but it is very slow, and many times it fails and forces me to restart from where it broke off.
Trying GIT...
When I try git it is even worse. The command executes, but then it fails, telling me that there is some broken stuff on the server side... I guess their git repository is not well-formed.
Brute force: WGET
So I decided to just cut the chicken's head off and do a direct recursive download using wget:
wget -r -l 0 -np -e robots=off http://gcc.gnu.org/svn/gcc/branches/cilkplus/
It has been downloading since yesterday... Is there a damn tarball to download, or do I really need to download everything with wget, without any progress info? Thank you

As asked by the OP, I'll post my comment as an answer.
To get the cilk-plus sources as a tarball:
In the install docs, there is a section labelled a. cilkplus. Under it there is a subsection labelled iii. Using a snapshot.
In that subsection there is a link to the gcc git repository. Click on it.
When the webpage is loaded, search at the top for the first link. At the time of writing this it was named [gcc/]. However, the name may change.
The link we need is the one with colored tags next to it.
Once you have located it, look on the same line for a link named snapshot, towards the right corner.
Click on it and wait while the tarball is generated.
NOTE: The link provided in the docs sometimes points to a rather old revision of the source code. To get the most recent one (1 or 2 days old at most), click on the summary section in the menu bar. An entry with the colored tags master and trunk next to it should appear.
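If you would rather skip the web UI entirely, a shallow clone also keeps the download small, because it fetches only the latest revision instead of the full history. A sketch, assuming the gcc git mirror exposes the Cilk Plus work under a branch literally named cilkplus (the branch name is an assumption; check the mirror's branch list first):
git clone --depth 1 --branch cilkplus https://gcc.gnu.org/git/gcc.git gcc-cilkplus   # only the tip of that branch is downloaded
If the clone is interrupted you unfortunately have to restart it, so on a flaky connection the snapshot tarball described above is still the safer option.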

Related

How to download a project folder from sourcecode.apple.com?

If I'm looking at an Apple opensource page like this:
https://opensource.apple.com/source/Chess/
How can I download one of those projects to my hard drive so I can open it in Xcode?
The main stumbling block for me is simply downloading one of the root folders (projects).
There is a similar existing question, but it is specific to the "wget" utility (this question is more general), and its best answer only suggests the official Apple OSS GitHub repo, which does NOT include all the projects contained on opensource.apple.com; for example, it only contains the most recent version of Chess, not ANY of the previous ones.
So, on opensource.apple.com, I cannot:
Right-click and select download, because the folders are just links to more HTML, not directly to files.
FTP to the url, because I don't have an FTP app installed on my Mac, and even if I did, I don't know if the Apple site would accommodate this.
Download each and every file one-by-one, recreating the local folder structure manually... because that seems foolish.
And as stated, while it is trivial to download from the Apple OSS github page, it doesn't contain the code I need!
I Googled this and surprisingly can't find anything.
So, is there a way to easily download from sourcecode.apple.com?
It looks like that's an older interface to access the open source code.
I went to https://opensource.apple.com/ and clicked on the View Releases button, which takes you to https://opensource.apple.com/releases/. There you can browse the individual projects. For example, to get the latest version of Chess, click on macOS, then macOS 13.0, then find Chess-466.4.1. There should be a download link and/or a link to the project on GitHub.
For instance, all open source projects for macOS 13.0: https://github.com/apple-oss-distributions/distribution-macOS/tree/macos-130.
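If you prefer the command line, the same release tree can be cloned with git; the macos-130 branch name comes straight from the URL above:
git clone --depth 1 --branch macos-130 https://github.com/apple-oss-distributions/distribution-macOS.git   # shallow clone of the macOS 13.0 release listing
The individual projects also live in their own repositories under the same apple-oss-distributions organization, so you can fetch just the one you need instead of the whole listing.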

Apple File System (APFS) Check if file is a clone on Terminal (shell)

With macOS High Sierra a new file system is available: APFS.
This file system supports clone operations for files: No data duplication on storage.
cp command has a flag (-c) that enables cloning in Terminal (shell).
But I didn't find a way to identify these cloned files afterwards.
Does anybody know how to identify cloned files with a shell command, or a flag in an existing command, like ls?
After 3 years and 2 months... I received a lot of points because of this question here on stackoverflow.
So yesterday I decided to revisit this topic :).
Using fcntl with F_LOG2PHYS it is possible to check whether two files are using the same physical blocks or not.
So I made a utility based on this idea and put it on GitHub (https://github.com/dyorgio/apfs-clone-checker).
It is only the first release guys, but I hope that the community can improve it.
Now maybe a good tool to remove duplicated files using clone APFS feature can be born. >:)
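If you just want to watch cloning in action from the shell (this only demonstrates that a clone shares storage with the original; it is not a clone detector like the utility above), something along these lines should work on an APFS volume:
cd /tmp
dd if=/dev/zero of=original.bin bs=1m count=1024   # write a 1 GB test file
df -h .                                            # note the free space
cp -c original.bin clone.bin                       # clone via clonefile(2)
df -h .                                            # free space is essentially unchanged
Deleting one of the two files afterwards barely changes the free space either, since the remaining file still references the shared blocks.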
The command you have used is not, strictly speaking, a feature of the APFS file system itself. cp -c calls a function named clonefile, which has been part of BSD/macOS since 2015 (see the man page):
http://www.manpagez.com/man/2/clonefile/
So if you clone a file, for example, you can change attributes on the original and the clone can keep different attributes.
I think the feature you are looking for is built on copy-on-write. You can see the difference if you make a clone with Time Machine.
I have not found a terminal command today that shows these differences, but clonefile is not the right function for that anyway.
The only known way today to show changed attributes in clones is Apple's Time Machine backup solution.
It is a snapshot solution. There is something about this in this Apple developer forum thread:
https://forums.developer.apple.com/thread/81171
I think this is meant to be an internal proprietary feature of APFS that you are not supposed to be playing with. It strikes me as a relatively useless feature. If you want to have two files that are the same and use standard APIs, try either hard or soft links, or else Apple aliases.

Need to change shebang for Strawberry Perl [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
This isn't so much a "question" as a methodology I found which I believe is working.
Most servers use the path "/usr/bin/perl" ... but Strawberry Perl (strawberryperl.com) uses its own unique path of "/strawberry/perl/bin" (!!) (I tried installing into a different directory, as allowed by the prompt, but then it wouldn't work at all! I read somewhere that some files within the installation are 'hardcoded' to the above path.)
I was not looking forward to having to rewrite the shebangs of around 400 offline files, and then having to change them all again when they are uploaded, so I sought another solution. I found it in something called a "symbolic link".
Basically, it's an internal Windows redirect. It says "if you see the path 'usr/bin/perl', go to 'strawberry/perl/bin' instead". There are two ways to set this up.
The first is to open a command-line terminal (type "CMD" in the Windows search box, then click "cmd.exe"). Use "cd.." to get back to the "C:\>" prompt, then enter "mklink /d usr\bin\perl strawberry\perl\bin\perl.exe" and press Enter. This sets up the symbolic link. (Note the direction of the slashes.) That's fine for one-time use. (It may work without adding ".exe", but to be sure...)
But I design websites offline, so I need the redirect to be set up each time I boot up. You can do this as well with a batch file.
Using a text file, enter the same command as you did at the prompt, and save it as a ".bat" file in your Startup folder (found in the left menu when clicking the "Start" button at the lower left). You may well find other icons for programs that also run at startup in this folder.
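For reference, the startup batch file could look roughly like this (a sketch only: the paths are examples, mklink has to run in cmd.exe with sufficient privileges, and this version creates a plain file link rather than the /d directory link described above):
@echo off
rem Recreate the Perl symlink at boot if it is missing (example paths; adjust to your setup)
if not exist C:\usr\bin\perl.exe (
    mklink C:\usr\bin\perl.exe C:\strawberry\perl\bin\perl.exe
)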
I'm 99% certain this is working, because I went into 'usr\bin\perl' and renamed the executables to 'perl_old.exe', 'perl_5.12.4_old.exe' and 'wperl_old.exe' (so that if a Perl script DID access "usr\bin\perl" it wouldn't find any program to run) ... and the file still ran when I put the URL into the browser.
So why the switch from ActiveState? I wanted to install a particular library. I tried it via PPM and was told I didn't have authorisation. No, this isn't a Windows "Administrator rights" issue; it's the fact that ActiveState now wants to charge $999 for access to certain files. "Well, you can still use 'dmake' to build the files downloaded directly from CPAN." Er, no, you can't... because "dmake" is one of the files under lock and key! And without that, you cannot install ANY module from CPAN. (The term "holding you to ransom" springs to mind.)
Using Strawberry Perl, it's just a case of starting a command-line terminal (CMD), moving back to the root (C:\>) and typing "cpan". You then type "install Module::Name". Boom! All the files for that particular module are downloaded and installed using the "Makefile.PL" associated with that module.
We won't get into the debate of a company charging to access items in the public domain; they're a business after all.
I know this might be teaching your grandmother to suck eggs for some of the more advanced users, but there may be other people on the verge of renaming all their files when switching to Strawberry Perl. Oh, I believe their program suite also includes C, C++ and Fortran compilers (no, I've no idea either!). One downside: due to all the extra program features they install, the directory is THREE TIMES LARGER than the ActiveState installation!
I'm pretty sure your problem with ActivePerl is that you're using an older version. I've just done:
C:\Users\myaccount\Documents>perl -MCPAN -e shell
It looks like you don't have a C compiler and make utility installed. Trying
to install dmake and the MinGW GCC compiler using the Perl Package Manager.
This may take a few minutes...
Downloading ActiveState Package Repository dbimage...done
Downloading MinGW-4.6.3...done
Downloading dmake-4.11.20080107...done
Unpacking MinGW-4.6.3...done
Unpacking dmake-4.11.20080107...done
Generating HTML for MinGW-4.6.3...done
Generating HTML for dmake-4.11.20080107...done
Updating files in site area...done
2759 files installed
Please use the `dmake` program to run commands from a Makefile!
cpan shell -- CPAN exploration and modules installation (v2.05)
Enter 'h' for help.
cpan>
Using version:
This is perl 5, version 20, subversion 1 (v5.20.1) built for MSWin32-x86-multi-thread-64int
ActiveState has a policy of not keeping fully up to date on older versions, because of the support overhead. You can see - for example - their builds of dmake here:
https://code.activestate.com/ppm/dmake/
From their web page:
Looking for access to older versions of ActivePerl?
Community Edition offers access to the newest versions of ActivePerl.
Access to older versions (Perl 5.6, 5.8, 5.10, 5.12, 5.14, 5.16) is available in Business Edition and Enterprise Edition.
E.g. to use the version you're currently using (5.12), you'd need to buy support. But you could use 5.18 or 5.20 for free.
I would also note: Windows doesn't use shebang paths anyway; it uses file associations.

Compiling Codeigniter's User Guide

I'm pulling latest Codeigniter from
https://github.com/EllisLab/CodeIgniter
However, when I check this user-guide page
https://github.com/EllisLab/CodeIgniter/tree/develop/user_guide_src
It is not something that I can view; it should be HTML. I think it needs to be compiled.
I don't want to use
http://codeigniter.com/user_guide/
this guide, because it is not as up to date as the one in the repository.
There have been a lot of enhancements since the 2.1.0 stable release.
How do I compile this?
Actually, it is explained on the CodeIgniter page, but I have never forked CI. How can I do it?
So where's the HTML?
Obviously, the HTML documentation is what we care most about, as it is
the primary documentation that our users encounter. Since revisions to
the built files are not of value, they are not under source control.
This also allows you to regenerate as necessary if you want to
"preview" your work. Generating the HTML is very simple. From the root
directory of your user guide repo fork issue the command you used at
the end of the installation instructions:
make html
You will see it do a whiz-bang compilation, at which point
the fully rendered user guide and images will be in build/html/. After
the HTML has been built, each successive build will only rebuild files
that have changed, saving considerable time. If for any reason you
want to "reset" your build files, simply delete the build folder's
contents and rebuild.
I found out how to do it. However, my host, HostGator, doesn't allow me to install Sphinx. I think it will be better to ask a Unix user to compile it for me.
read the documentation...
https://github.com/EllisLab/CodeIgniter/blob/develop/user_guide_src/README.rst
download the guide to your local machine and compile it yourself.
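In practice the whole build boils down to a few commands on any machine with Python available (a sketch; the user guide's README may ask for specific Sphinx extensions or a pinned Sphinx version, so check it first):
git clone https://github.com/EllisLab/CodeIgniter.git
cd CodeIgniter/user_guide_src
pip install sphinx            # plus whatever extensions the README lists
make html                     # the rendered guide ends up in build/html/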

Lightweight version control for small projects (prototypes, demos, and one-offs) [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 9 years ago.
Background
I work on a lot of small projects (prototypes, demos, one-offs, etc.). They are mostly coded in Visual Studio (WPF or ASP.NET with code written in C#). Usually, I am the only coder. Occasionally, I work with one other person. The projects come and go, usually in a matter of months, but I have a constantly evolving set of common code libraries that I reuse.
The problem
I've tried to use source control software before (SourceGear Vault), but it seemed like a lot of overhead when working on a small project, especially when I was the only programmer. Still, I would like some of the features that version control offers.
Here's a list of features I'd like to have:
Let me look at any file in an older version of my project instantly. Please don't force me through the rigmarole of (1) checking in my current work, (2) reverting my local copy to the old version, and (3) checking the current version back out so I can once again work on it.
In fact, if I'm the only one on the project, I don't ever want to check out. The only thing I want to be able to do is say, "Please save what I have now as version 2.5."
Store my data efficiently. If I have 100 Mb of media in my project, I don't want that to get copied with every new version I release. Only copy what changes.
Let me keep my common library code files in a single location on my hard drive so that all my current projects can benefit from any bug fixes or improvements I make to my library. I don't want to have to keep copying my library to other projects every time I make a change.
However, do let me go back in time to any version of any project and see what the source code (including the library code) looked like at the time that version was released.
Please don't make me store a special database server on my machine that makes my computer take longer to start up and/or uses resources when I'm not even programming.
Does this exist?
If not, how close can I get?
Edit 1: TortoiseSVN impressions
I did some experimenting with Subversion. A couple observations:
Once you check something in to a repository, it does stuff to your files. It puts these hidden .svn folders inside your project folders. It messes with folder icons. I have yet to get my project back to "normal". Unversioning the working copy got me part of the way there, but I still have folders with blue question mark icons. This makes me grumpy :-/ Update: finally got rid of the folder icons by manually creating new folders and copying the folders over. (Not good.)
I installed the open source plugin for Visual Studio (AnkhSVN). After creating a fresh repository on my hard drive, I attempted to check in a solution from Visual Studio. It did exactly what I was afraid it would do. It checked in only the folders and files that are physically (from the POV of the file system) inside my solution folder. In order to accomplish item #5 above, I need all source code used by the solution to be checked in. I attempted to do this by hand, but it wasn't a user-friendly process (for one thing, when I selected multiple library projects at once and attempted to check them in, it only appeared to check in the first one). Then, I started getting error dialogs when I tried to check in subsequent projects.
So, I'm a little frustrated with SVN (and its supporting software) at this point.
Edit 2: TortoiseHG impressions
I'm trying out Mercurial now (TortoiseHG). It was a little bit difficult to figure out at first, no better or worse than TortoiseSVN I'd say. I noticed an RPC Server on startup (relates to item 6). I figure it should be possible to turn this off if I'm not sharing anything with anyone, but it wasn't something I could figure out just by looking at the options (will check out the help later).
I do appreciate having my local repository as just a single .hg folder. And, simply throwing the folder in the Recycle Bin seemed to be all I needed to do to return everything back to normal (i.e., unversion my project). When I check in (commit), it seems to offer a simple comment window only. I thought maybe there would be a place to put version numbers.
My (probably not very clever) attempt to add a Windows shortcut (a folder aliasing my library projects) failed, not that I really thought it would work :) I thought maybe this would be a sneaky way to get my library projects (currently located elsewhere) included in the repository. But no. Maybe I'll try out "subrepos", but that feature is under construction. So, iffy that I'll be able to do items 4 and 5 without some manual syncing.
Any of the distributed source control solutions seem to match your requirements. Take a look at bazaar, git or mercurial (already mentioned above). Personally I have been using bazaar since v0.92 and have no complaints.
Edit: Heck, after looking at it again, I'm pretty sure any of those 3 solutions handles all 6 of your requested features.
Distributed Version Control Systems (Mercurial, Bazaar, Git) are nice in that they can be completely self-contained in a single directory (.hg, .bzr, .git) in the top of the working copy, where Subversion uses a separate repository directory, in addition to .svn directories in every directory of your working copy.
Mercurial and Subversion are probably the easiest to use on Windows, with TortoiseHG and TortoiseSVN; the Bazaar GUIs have also been improving. Apparently there is also TortoiseGit, though I haven't tried it. If you like the command line, Easy Git seems to be a bit nicer to use than the standard git commands.
I'd like to address point 4, common libraries, in more detail. Unfortunately I don't think any of them will be too easy to use, since I don't think they're directly supported by GUIs (I could be wrong). The only one of these I've actually used in practice is Subversion Externals.
Subversion is reasonably good at this job; you can use Externals (see the chapter in the SVN book), but to associate versions of a project with versions of a library you need to "pin" the library revision in the externals definition (which is itself versioned, as a property of the directory).
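For the sake of illustration, pinning a library with svn:externals looks roughly like this (the repository URL and revision number are made up):
svn propset svn:externals "-r 1234 https://example.com/svn/common-lib/trunk lib" .   # run in the project's root working copy
svn commit -m "Pin common-lib at r1234"
svn update                                                                           # pulls the library into ./lib at the pinned revision
Because the property itself is versioned, checking out an old revision of the project also brings back the externals definition it had at that time.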
Mercurial supports something similar, but both options seem a bit immature: the subrepository support built into the latest version, and the "Forest Extension".
Git has "submodule" support.
I haven't seen anything like sub-repositories or sub-modules for Bazaar, unfortunately.
I think Fog Creek's new product, Kiln, will get you pretty close. In response to your specific points:
This is easily done through the web interface -- you don't need to touch your local copy or update. Just find the file you want, click the revision you want to see, and your code will be in front of you.
I'm not sure you can do things exactly like "Please save this as version 2.5", but you can add unique tags to changesets that allow you to identify a special revision (where "special" can mean whatever it wants to you).
Mercurial does a great job of this already (which Kiln uses in the back end), so there shouldn't be any problems in this regard.
By creating different repositories, you can easily have one central 'core' section which is consistent across various projects (though I'm not entirely sure if this is what you're talking about).
I think most version control systems allow you to do this...
Kiln is hosted, so there's no hit on performance to your local machine. The code you commit to the system is kept safe and secure.
Best of all, Kiln is free for up to two licenses by way of their Student and Startup Edition (which also gets you a free copy of FogBugz).
Kiln is in public beta right now -- you can request your account at my first link -- and more users are being let in as more and more problems are resolved. (For some idea of what current beta users are saying, take a look at the Kiln Knowledge Exchange site that's dedicated to feedback.)
(Full Disclosure: I am an intern currently working at Fog Creek)
For your requirements I would recommend subversion.
Let me look at any file in an older version of my project instantly. Please don't force me through the rigmarole of (1) checking in my current work, (2) reverting my local copy to the old version, and (3) checking the current version back out so I can once again work on it.
You can use the repository browser of Tortoise Svn to navigate to every existing version easily.
In fact, if I'm the only one on the project, I don't ever want to check out. The only thing I want to be able to do is say, "Please save what I have now as version 2.5."
This is done by svn copy . svn://localhost/tags/2.5 -m "Tag version 2.5" (a repository-side copy needs a commit message).
Store my data efficiently. If I have 100 Mb of media in my project, I don't want that to get copied with every new version I release. Only copy what changes.
Subversion gives you this out of the box; the repository stores deltas rather than full copies.
Let me keep my common library code files in a single location on my hard drive so that all my current projects can benefit from any bug fixes or improvements I make to my library. I don't want to have to keep copying my library to other projects every time I make a change.
However, do let me go back in time to any version of any project and see what the source code (including the library code) looked like at the time that version was released.
Put your libraries into the same svn repository as your remaining code and you'll have global revision numbers to switch back all to a common state.
Please don't make me store a special database server on my machine that makes my computer take longer to start up and/or uses resources when I'm not even programming.
You only have to start svnserve to get a local server. If you only work on one machine you can even do without a server entirely and access the repository directly through file:// URLs.
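A purely local setup then looks roughly like this (paths are examples):
svnadmin create ~/svn/myrepo                                                        # one-time repository creation, no server process
svn import ./MyProject file://$HOME/svn/myrepo/MyProject/trunk -m "Initial import"
svn checkout file://$HOME/svn/myrepo/MyProject/trunk MyProject-wc                   # working copy that talks to the repo via file://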
I'd say that Mercurial along with TortoiseHg will do what you want. Of course, since you don't seem to be requiring much, subversion with TortoiseSvn should serve equally well, if you only ever work alone, though I think mercurial is nicer for collaboration.
Mercurial:
hg cat --rev 2.5 filename (or "Annotate Files" in TortoiseHg)
hg commit ; hg tag 2.5
Mercurial stores (compressed) diffs (and "keyframes" to avoid having to apply ten thousand diffs in a row to find a version of a file). It's very efficient unless you're working with large binary files.
Symlink the library into all the projects?
OK, now that I read this point I'm thinking Mercurial's Subrepos are closer to what you want. Make your library a repository, then add it as a subrepository in each of your projects. When your library updates you'll need to hg pull in the subrepos to update it, unfortunately. But then when you commit in a project Mercurial will record the state of the library repo, so that when you check out this version later to see what it looked like you'll get the correct version of the library code.
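A rough sketch of that setup (repository locations are examples):
cd myproject
hg clone ../common-lib lib              # working copy of the library inside the project
echo "lib = ../common-lib" > .hgsub     # declare lib as a subrepository
hg add .hgsub
hg commit -m "Track common-lib as a subrepo"
From then on every project commit records the exact library changeset in .hgsubstate, which is what makes the time travel in point 5 work.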
Mercurial doesn't do that; it stores its data in plain files, so there is no database server to run.
Take a look at Fossil; it's a single exe file.
http://www.fossil-scm.org
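A minimal Fossil workflow, just to give a taste (file and directory names are examples):
fossil init ~/repos/myproject.fossil    # the whole repository is this one SQLite file
mkdir myproject && cd myproject
fossil open ~/repos/myproject.fossil    # attach the working directory to the repository
fossil add .
fossil commit -m "Initial check-in"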
As people have pointed out, nearly any DVCS will probably serve you quite well for this. I thought I would mention Monotone since it hasn't been mentioned already in the thread. It uses a single binary (mtn.exe), and stores everything as a SQLite database file, nothing at all in your actual workspace except a _MTN directory on the top level (and .mtn-ignore, if you want to ignore files). To give you a quick taste I've put the mtn commands showing how one carries out your wishlist:
Let me look at any file in an older version of my project instantly.
mtn cat -r t:1.8.0 readme.txt
Please save what I have now as version 2.5
mtn tag $(mtn automate heads) 2.5
Store my data efficiently.
Monotone uses xdelta to only save the diffs, and zlib to compress the deltas (and the first version of each file, for which of course there is no delta).
Let me keep my common library code files in a single location on my hard drive so that all my current projects can benefit from any bug fixes or improvements I make to my library.
Monotone has explicit support for this; quoting the manual: "The purpose of merge_into_dir is to permit a project to contain another project in such a way that propagate can be used to keep the contained project up-to-date. It is meant to replace the use of nested checkouts in many circumstances."
However, do let me go back in time to any version of any project and see what the source code (including the library code) looked like at the time that version was released.
mtn up -r t:1.8.0
Please don't make me store a special database server on my machine
SQLite can be, as far as you're concerned, a single file on your disk that Monotone stores things in. There is no extra process or startup craziness (SQLite is embedded, and runs directly in the same process as the rest of Monotone), and you can feel free to ignore the fact that you can query and manipulate your Monotone repository using standard tools like the sqlite command line program or via Python or Ruby scripts.
Try GIT. Lots of positive comments about it on the Web.
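To relate that to the wish list above, the basic local Git workflow (no server, just one .git directory in the project root) looks like this (a sketch; the tag and file names are examples):
git init                          # the whole repository lives in ./.git
git add . && git commit -m "work"
git tag v2.5                      # "please save what I have now as version 2.5"
git show v2.5:src/Main.cs         # look at any file in an older version instantly, no checkout needed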

Resources