Create Working Folders using stcmd - starteam

I have been trying to write an stcmd command that checks out code from a StarTeam repository. Here's what the command looks like:
stcmd co -p "Username:Password#localHost:1024/Store Server/Store Server/USB/sources/$OEM$/$$/Setup/Scripts"
Every time I run this code, I get the following response:
C:\StarTeam\Store Server\USB\sources\$OEM$\$$\Setup\Scripts\osConfig.ps1 (The system cannot find the path specified)
I'm guessing I need to create the working folder's location for my check-out command to work properly. Is there a way to create the working folders of a repository using stcmd? I know I can do it through StarTeam, but I wanted to see if it's possible through stcmd, so it can create the folders on new computers when my code runs.

You haven't said which version you're using, but in 5.4 the command to create working directories is:
stcmd local-mkdir
so you'd need something like:
stcmd local-mkdir -p "Username:Password#localHost:1024/Store Server/Store Server/USB/sources/$OEM$/$$/Setup/Scripts"
This is the answer to your question, but I'm not sure it'll be the solution to your problem!

Not sure which version of StarTeam you're using, but in 13.0 at least, there's an option -cwf (checkout working folders) which you can append to the check-out command. If you also want this to check out subfolders of said working folders, you can also append -is (include subfolders, maybe?). So, try:
stcmd co -p "Username:password#localHost:1024/Store Server/Store Server/USB/sources/$OEM$/$$/Setup/Scripts/" -cwf -is
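If one command alone doesn't do it on your version, you could also try creating the working folders first and then checking out, combining both suggestions (untested; flag support varies between stcmd versions):
stcmd local-mkdir -p "Username:Password#localHost:1024/Store Server/Store Server/USB/sources/$OEM$/$$/Setup/Scripts"
stcmd co -p "Username:Password#localHost:1024/Store Server/Store Server/USB/sources/$OEM$/$$/Setup/Scripts" -is -cwf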

Related

How to change SYMLINK to SYMLINKD in batch script

We're sharing SYMLINKD files on our git project. It almost works, except git modifies our SYMLINKD files to SYMLINK files when pulled on another machine.
To be clear, on the original machine, symlink is created using the command:
mklink /D Annotations ..\..\submodules\Annotations\Assets
On the original machine, the dir cmd displays:
25/04/2018 09:52 <SYMLINKD> Annotations [..\..\submodules\Annotations\Assets]
After cloning, on the receiving machine, we get
27/04/2018 10:52 <SYMLINK> Annotations [..\..\submodules\Annotations\Assets]
As you might guess, a file-type link pointing at a directory [..\..\submodules\Annotations\Assets] does not work correctly.
To fix this problem we either need to:
Prevent git from modifying our symlink types.
Fix our symlinks with batch script triggered on a githook
We're going with 2, since we do not want to require all users to use a modified version of git.
My limited knowledge of batch scripting is impeding me. So far, I have looked into simply modifying the attrib of the file, using the info here:
How to get attributes of a file using batch file and https://superuser.com/questions/653951/how-to-remove-read-only-attribute-recursively-on-windows-7.
Can anyone suggest what attrib commands I need to modify the symlink?
Alternatively, I realise I can delete and recreate the symlink, but how do I get the target directory for the existing symlink short of using the dir command and parsing the path from the output?
I think it's https://github.com/git-for-windows/git/issues/1646.
To be clear: your question appears to be a manifestation of the XY problem. The Git instance used to clone/fetch the project appears to handle symbolic links to directories incorrectly, creating symbolic links pointing to files instead. That looks like a bug in GfW, so instead of digging into it you've invented a workaround and are asking how to make it work.
So I'd rather try to help the GfW maintainers and whoever reported #1646 to fix the problem. If you need a stop-gap solution, a proper way would be to go another route and script several calls to git ls-tree to figure out which entries are directory symlinks (they have a special set of permission bits; you may start here). You would then traverse all the tree objects of the HEAD commit, recursively, figure out which symlinks point at directories, and fix up the matching entries in the work tree by deleting them and recreating them with mklink /D or whatever creates the correct sort of symlink.
Unfortunately, trying to script this with cmd.exe's limited scripting facilities would be an exercise in futility. I'd use a more capable language (PowerShell, for example, or, since you're probably a Windows shop, even .NET would be fine).
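Since Git for Windows ships with Git Bash, here is a rough bash sketch of that approach for illustration (untested; it assumes the stored link targets are relative paths and that mklink is allowed to create symlinks on your machine):
#!/bin/bash
# Rough sketch: run from the repository root in Git Bash after a clone/pull.
# Finds index entries stored as symlinks (mode 120000), checks whether their
# target is a directory, and recreates them as directory symlinks via mklink /D.
git ls-files -s | while read -r mode hash stage path; do
    [ "$mode" = "120000" ] || continue            # only symlink entries
    target=$(git cat-file blob "$hash")           # stored link target (may use \ or /)
    target_unix=${target//\\//}                   # forward slashes for the bash test
    linkdir=$(dirname "$path")
    if [ -d "$linkdir/$target_unix" ]; then       # target is a directory
        rm -f "$path"                             # drop the file-type link Git created
        wintarget=${target//\//\\}                # backslashes for mklink
        (cd "$linkdir" && cmd //c "mklink /D \"$(basename "$path")\" \"$wintarget\"")
    fi
done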

Retrieving latest file in a directory from a remote server

I was hoping to crack this myself, but it seems I have fallen at the first hurdle because I can't make head nor tail of the options I've read about.
I wish to access a database file hosted as follows (hhsuite_dbs is a folder containing several databases):
http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/pdb70_08Oct15.tgz
Periodically, they update these databases, and so I want to download the latest version. My plan is to run a bash script via cron, most likely monthly (though I've yet to even tackle the scheduling aspect of the task).
I believe the database is refreshed fortnightly, so if my script runs monthly I can expect there to be a new version. I'll then be running downstream programs that require the database.
My question is then, how do I go about retrieving this (and for a little more finesse I'd perhaps like to be able to check whether the remote file has changed in name or content to avoid a large download if unnecessary)? Is the best approach to query the name of the file, or the file property of date last modified (given that they may change the naming syntax of the file too?). To my naive brain, some kind of globbing of the pdb70 (something I think I can rely on to be in the filename) then pulled down with wget was all I had come up with so far.
EDIT Another confounding issue that has just occurred to me is that the file I want won't necessarily be the newest in the folder (as there are other types of databases there too); rather, I need the newest version of, in this case, the pdb70 database.
Solutions I've looked at so far have mentioned weex, lftp and curlftpfs, but all of these seem to require logins/passwords for the server, which I don't have or need if I just download it via the web. I've also seen mention of rsync, but from a cursory read it seems like people are steering clear of it for FTP use.
Quite a few barriers in your way for this.
My first suggestion is that rather than getting the filename itself, you simply mirror the directory using wget, which should already be installed on your Ubuntu system, and let wget figure out what to download.
base="http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/"
cd /some/place/safe/
wget --mirror -nd "$base"
And new files will be created in the "safe" directory.
But that just gets you your mirror. What you're still after is the "newest" file.
Luckily, wget sets the datestamp of files it downloads, if it can. So after mirroring, you might be able to do something like:
newestfile=$(ls -t /some/place/safe/pdb70*gz | head -1)
Note that this fails if ever there are newlines in the filename.
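One way around parsing ls entirely is a plain glob loop; a sketch, using the same /some/place/safe path as above, that copes with any filename:
# pick the most recently modified pdb70 archive without parsing ls
newestfile=
for f in /some/place/safe/pdb70*gz; do
    [ -e "$f" ] || continue                        # skip the literal pattern if nothing matched
    if [ -z "$newestfile" ] || [ "$f" -nt "$newestfile" ]; then
        newestfile=$f
    fi
done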
Another possibility might be to check the difference between the current file list and the last one. Something like this:
#!/bin/bash
base="http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/"
cd /some/place/safe/
wget --mirror -nd "$base"
rm index.html* *.gif # remove debris from mirroring an index
ls > /tmp/filelist.txt.$$
if [ -f /tmp/filelist.txt ]; then
    echo "Difference since last check:"
    diff /tmp/filelist.txt /tmp/filelist.txt.$$
fi
mv /tmp/filelist.txt.$$ /tmp/filelist.txt
You can parse the diff output (man diff for more options) to determine what file has been added.
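For instance, inside the if block above you could capture just the names that are new since the last run; a sketch that relies on diff's default "> " marker for added lines:
added=$(diff /tmp/filelist.txt /tmp/filelist.txt.$$ | sed -n 's/^> //p')   # names new this run
new_pdb70=$(printf '%s\n' "$added" | grep '^pdb70')                        # the new pdb70 archive, if any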
Of course, with a solution like this, you could run your script every day and hopefully download a new update within a day of it being ready, rather than a fortnight later. The nice thing about --mirror is that it won't download files that are already on hand.
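If you do schedule it with cron, the crontab entry could look something like this (the script path and log file are placeholders):
# m h dom mon dow  command
0 3 * * * /bin/bash /some/place/safe/mirror_hhsuite.sh >> /some/place/safe/mirror.log 2>&1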
Oh, and I haven't tested what I've written here. That's one monstrously large file.

How to create a batch file in Mac?

I need to find a solution at work to backup specific folders daily, hopefully to a RAR or ZIP file.
If it was on a PC, I would have done it already. But I don't have any idea how to approach it on a Mac.
What I basically want to achieve is an automated task, that can be run with an executable, that does:
compress a specific directory (/Volumes/Audio/Shoko) to a rar or zip file.
(in the zip file, exclude all *.wav files in all subdirectories and a directory named "Videos").
move it to a network share (/Volumes/Post Shared/Backup From Sound).
(or compress directly to this folder).
automate the file name of the Zip file with dynamic date and time (so no duplicate file names).
Shut down the Mac when finished.
I want to say again, I don't usually use a Mac, so things like what kind of file to create for the script are not trivial for me yet.
I have tried to put Mark's bash lines (from the first answer, below) in a txt file and executed it, but it had errors and didn't work.
I also tried to use Automator, but it's too plain, no advanced options.
How can I accomplish this?
I would love a working example :)
Thank You,
Dave
You can just make a bash script that does the backup and then you can either double-click it or run it on a schedule. I don't know your paths and/or tools of choice, but something along these lines:
#!/bin/bash
FILENAME=`date +"/Volumes/path/to/network/share/Backup/%Y-%m-%d.tgz"`
cd /directory/to/backup || exit 1
tar -cvzf "$FILENAME" .
You can save that on your Desktop as backup and then go in Terminal and type:
chmod +x ~/Desktop/backup
to make it executable. Then you can just double click on it - obviously after changing the paths to reflect what you want to backup and where to.
Also, you may prefer to use some other tools, such as rsync, but the method is the same.
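To cover the exclusions, the timestamped name and the shutdown from the question, a fuller sketch might look like this (paths taken from the question; it assumes the network share is already mounted and that zip's exclusion patterns match how the paths are stored):
#!/bin/bash
# Back up /Volumes/Audio/Shoko to a timestamped zip on the network share,
# skipping .wav files and the Videos directory, then shut the machine down.
SRC="/Volumes/Audio/Shoko"
DEST="/Volumes/Post Shared/Backup From Sound"
STAMP=$(date +"%Y-%m-%d_%H-%M-%S")        # date and time, so names never collide

cd "$SRC" || exit 1
zip -r "$DEST/Shoko_$STAMP.zip" . -x "*.wav" "Videos/*"

# shutting down needs admin rights; leave this line out while testing
sudo shutdown -h now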

explanations about the code

I have this script and I have no idea what it is doing; I'd be happy if somebody could explain it to me:
#!/bin/tcsh
if (-d test) then
svn up test
else
svn checkout http:some address test
endif
cd tests
python test_some.py $argv
P.S. I can't find info about the cd and svn commands.
thanks in advance for any help
The script runs a second revision-controlled test script
This script runs a python program which appears to run some tests. The script understands that the test directory is stored in a subversion repository.
If there is a test directory, it updates it in case it has been changed in the repository, perhaps by another svn user or by the same user in a different working directory.
If there is no test directory, it checks it out.
Then it changes its current directory to the working directory.
Then it runs the test script.
I'm a bit confused about one thing. It checks out "test" but then changes its directory to "tests". So either there is a transcription error in the original post or something slightly more complex is going on, like, it somehow assumes that tests exists but not test.
cd is the "Change Directory" command.
svn is a source code repository client.
The script does the following:
if the test folder exists
update it through subversion
else
check it out from subversion repository
go into the tests directory // interestingly enough, it doesn't match the checked out directory name?
run the test_some.py python file, passing the script arguments.
cd, and svn, and python are executable names. cd is the command for changing current directory. svn is the command (executable name) for Subversion source control system. python is the Python language interpreter.

Incremental deploy from a shell script

I have a project, where I'm forced to use ftp as a means of deploying the files to the live server.
I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the Mercurial repository (it also takes care of user-uploaded files and folders, post-deploy changes, etc.).
It's working well, but the project is starting to get big enough to make the deployment process too long.
I'd like to modify the script to look up which files have changed, and only deploy the modified files. (the backup is fine atm as it is)
I'm using Mercurial as a VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over them, upload each modified file, and delete each removed file.
I can use hg log -vr rev1:rev2, and from the output, I can carve out the changed files with grep/sed/etc.
Two problems:
I have heard the horror stories about how parsing the output of ls leads to insanity, so my guess is that the same applies here: if I try to parse the output of hg log, the file names will undergo word-splitting and all kinds of transformations.
hg log doesn't tell me whether a file was modified, added or deleted. Differentiating between modified and deleted files is the least I would need.
So, what would be the correct way to do this? I'm using yafc as an ftp client, in case it's needed, but willing to switch.
You could use a custom style that does the parsing for you.
hg log --rev rev1:rev2 --style mystyle
Then pipe it to sort -u to get a unique list of files. The file "mystyle" would look like this:
changeset = '{file_mods}{file_adds}\n'
file_mod = '{file_mod}\n'
file_add = '{file_add}\n'
The file_mods and file_adds templates list the files that were modified or added; there are similar file_dels and file_del templates for deleted files.
Alternatively, you could use hg status -ma --rev rev1-1:rev2, which adds an M or an A before modified/added files. You need to pass a different revision range, one less than rev1, as it reports the status since that "baseline". Removed files are similar: you need the -r flag, and an R is printed before each removed file.
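If you go the hg status route, a rough sketch of the driving loop could look like this (untested; PREV_REV/NEW_REV and the echo lines are placeholders for your actual yafc upload/delete calls):
#!/bin/bash
# Turn "hg status" output into per-file deploy actions.
hg status -mar --rev "$PREV_REV:$NEW_REV" | while IFS= read -r line; do
    state=${line%% *}      # first column: M (modified), A (added) or R (removed)
    file=${line#* }        # everything after the status letter
    case "$state" in
        M|A) echo "upload: $file" ;;           # replace with your yafc upload command
        R)   echo "delete on server: $file" ;; # replace with your yafc delete command
    esac
done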
