The environment
Master PC has access to shared drive X
Master PC has Jenkins as a Windows service
Slave PC is a Windows PC on the same network as the master
Slave PC will most likely not have access to drive X (there will be many slave PCs running this in the future)
The scenario
I need to copy some files from drive X to the slave machine. This is conditional on a parameter of the job, so it should be a pipeline step; we don't want to copy the files when they are not needed. The files to copy can be large, so stash/unstash is not an option.
So basically my question is: is there a simple way to solve this scenario without having to give the slave PC(s) access to drive X?
I think you should copy the files to a neutral location, like a binary repo, and copy from there.
So ultimately I found that stash has no hard limit. For now I'm using stash/unstash even on large files (e.g. 1.5 GB) without any errors, until we start using a different method, like the one in Holleoman's answer.
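For reference, a rough sketch of one way to express this as a conditional pipeline stage using the stash/unstash approach described above; the parameter name (COPY_FILES), the node labels, and the X:\ path are made up for illustration and are not from the original job:

    pipeline {
        agent none
        parameters {
            booleanParam(name: 'COPY_FILES', defaultValue: false, description: 'Copy input files from drive X?')
        }
        stages {
            stage('Fetch from X') {
                when { expression { params.COPY_FILES } }
                agent { label 'master' }   // only this node can see the X: drive
                steps {
                    // copy from the share into the workspace, then stash for other nodes
                    bat 'xcopy /E /I /Y "X:\\input" "%WORKSPACE%\\input"'
                    stash name: 'xfiles', includes: 'input/**'
                }
            }
            stage('Use files on slave') {
                when { expression { params.COPY_FILES } }
                agent { label 'windows-slave' }
                steps {
                    unstash 'xfiles'
                }
            }
        }
    }

The copy and stash happen only on the node that can see X:, and the slave just unstashes, so it never needs access to the share; both stages are skipped entirely when the parameter is false.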
Related
My FreeNAS server is slowly dying, and before that happens I need to migrate all the data on the NAS to a Windows server.
The FreeNAS box has ZFS snapshots, and I need to restore data from a few days ago to the Windows server.
I have done some research but can't figure out the best way to do this (I am not Linux/ZFS savvy).
So what I need to do is:
Restore a ZFS snapshot from a few days ago to a Windows server
I mounted a Windows share on the FreeNAS box using mount_smbfs //username:password@server.name/share_name share_name/
I can copy and create files on that share just fine. So I was wondering if it is possible to restore an entire dataset from a snapshot to the Windows share.
Any help or tips are much appreciated.
Note: I could easily copy all the data on a FreeNAS volume to the Windows share, but what makes it complicated for me is restoring data from a snapshot to the Windows share without overwriting the current data on the volume.
You have two sensible possibilities:
Access the ZFS dataset (shared over SMB) from your Windows Server, then right-click on it in Explorer and choose "Previous Versions". After a short time (depending on the number of snapshots) you will get a list of all snapshots with their dates. You can then either explore them and copy some files over, or copy everything to another location (e.g. your new share).
Mount the Windows share on FreeNAS like you did, then go to <pool>/<filesystem>/.zfs/snapshot/ (path completion on the shell might be turned off for the .zfs directory, so type it in manually). There you'll find all your snapshots (like you would have on Windows' Previous Versions) and you can copy some or all files over to the new directory.
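For the second option, a rough sketch of the commands on the FreeNAS shell; the pool, dataset, snapshot, and share names below are placeholders, not your actual names:

    # mount the Windows share somewhere convenient
    mkdir -p /mnt/winshare
    mount_smbfs //username:password@server.name/share_name /mnt/winshare
    # snapshots are exposed read-only under the hidden .zfs directory of the dataset
    ls /mnt/tank/mydataset/.zfs/snapshot/
    # copy the contents of one snapshot to the share without touching the live data
    rsync -rtv /mnt/tank/mydataset/.zfs/snapshot/auto-2016-01-01/ /mnt/winshare/restore/

Because the .zfs/snapshot directories are read-only, this cannot overwrite anything on the volume itself; it only writes to the Windows share.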
I would suggest the first way, because you have the GUI and cannot do any harm to the FreeNAS system this way.
On the other hand, have you thought about the possibility of rescuing the system? You did not specify why it's dying, but things like hard drives or mainboards can be swapped quite easily without having to set everything up anew. Maybe this would help you more than moving the data off to another, unconfigured system?
I have an awk script that runs on specific log files. Initially we ran this on the machine that generates the log files, so all was good; at the end of the script I just pointed it at the local directory and file I need it to run on, for example /logs/logfile1.
But now I've added several other machines to help load-balance our application, so each time a particular machine is accessed (in round-robin fashion), that machine writes its own log file locally.
How do I get the script to run on one machine but access the log files from all of the other machines as well? (I could copy the script, run it locally on each machine, and append the outputs to one file, since there are only 5 machines right now, but I figure there is an easier solution.)
Also, I run CentOS 6.x on these servers, if that is helpful.
EDIT: I suppose I could create soft links to the other machines on the machine that runs the script. Just wondering if there is something easier?
Mount the other machines' file systems (via SSH, NFS, etc.) on the machine with the script.
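For example, with sshfs on CentOS 6 (the fuse-sshfs package from EPEL); the host name, mount point, and script name below are placeholders:

    # on the machine that runs the awk script
    yum install fuse-sshfs
    mkdir -p /mnt/web1-logs
    sshfs web1:/logs /mnt/web1-logs
    # the remote log file is now readable like a local one
    awk -f myscript.awk /mnt/web1-logs/logfile1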
Mounting the required directories of all machines on one machine is probably the best solution. However, if you expect the number of machines to grow in the future, you should aim for a more scalable solution.
You could set up a solution like this:
Maintain a file on one machine with a list of all the other machines and their respective log directories.
Have a script that telnets/SSHes to each of those remote machines and executes your awk script (see the sketch below).
Retrieve all the output files via FTP to one machine and merge them.
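A minimal sketch of steps 2 and 3 combined, assuming passwordless SSH from the central machine; the host names, script name, and paths are placeholders:

    #!/bin/sh
    # run the awk script on each remote machine and collect the output locally
    # (the host list could also be read from the file mentioned in step 1)
    : > combined.out
    for host in web1 web2 web3 web4 web5; do
        scp -q myscript.awk "$host":/tmp/myscript.awk
        ssh "$host" 'awk -f /tmp/myscript.awk /logs/logfile1' >> combined.out
    done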
So I found out how to share folders using VirtualBox running Windows 8.
I was wondering: if I save files or projects from Windows 8 to the shared folder on my Mac, will Time Machine back up those files onto my external hard drive, even though the files were made in Windows? The hard drive is of course formatted for Mac because of that whole debacle, but that is beside the point.
Also, my assumption is that I would not be able to access the files on my Mac-formatted external hard drive from VirtualBox running Windows 8. Is this true?
To my knowledge, you cannot access the files on a journaled (HFS+) formatted hard drive from Windows without extra software. If I understand you correctly, you are trying to back up files created in the Windows VM onto your Time Machine backup hard drive?
I'm sure you have solved this by now, but you should consider backing up the VM itself. If the files on the Windows machine are important, you can leave them in a shared folder and have Time Machine back up that folder.
I have a virtual machine (in Virtual PC) that is used to run/update specific COM objects in our solution. Currently, both the host OS and the VM OS have separate workspaces, and I have to check out the files in either location, then check them in separately as work is completed.
It's also a huge branch (several GB of data) that needs to be pulled down over a slow VPN connection. Given that I need the files on my host and the VM, it means pulling this code down twice.
Is there a way I can configure the VM to make use of the workspace on the host? I'm fairly sure I can map that folder into the VM, but I want checkouts done in the VM to be made against the host's workspace.
Update 1
I tried to fool the system by setting the _CLUSTER_NETWORK_NAME_ environment variable as per this answer. This certainly allowed Visual Studio to see the workspace as valid for the machine. However, when I rebooted the machine, I couldn't connect to it, since the guest and the host now appear to have the same name.
You cannot have the same workspace on two machines, full stop. You can fool Team Explorer by mapping a common file system for both machines, but be careful: you should always get latest from one client and never from the other.
Now I can suggest you test this recipe based on DiskMgmt.msc.
Say VM and PM are your two clients, both map $/YourProj/src, and you have $/YourProj/src/Common that you want to download only once.
PM workspace mapping is $/YourProj/src -> C:\src.
PM is at least Win7; create a VHD and mount it on C:\src\Common; now you can get latest.
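For the VHD step, a rough diskpart sketch on PM; the file path, size, and mount folder are placeholders, and C:\src\Common must already exist and be empty:

    rem save as create_common_vhd.txt and run:  diskpart /s create_common_vhd.txt
    create vdisk file="C:\vhd\common.vhd" maximum=20480 type=expandable
    select vdisk file="C:\vhd\common.vhd"
    attach vdisk
    create partition primary
    format fs=ntfs quick
    assign mount="C:\src\Common"
    rem later, "select vdisk" followed by "detach vdisk" releases it so the VM can use it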
Unmount the VHD, start your VM with the same VHD as a secondary disk. Mount this secondary disk as C:\src\Common inside the VM.
Inside the VM the workspace mapping should be
$/YourProj/src -> C:\src
$/YourProj/src/Common -> (cloaked)
Perforce is downloading files to the external hard drive connected to my MacBook Pro as writable ("777"). It's as if the "allwrite" option were set in my workspace, but it's not.
I thought Perforce was supposed to mark the files read-only until I check them out. Is there a setting somewhere I missed?
Rev. P4V/MACOSX104U/2009.2/236331
MacBookPro OSX 10.5.8
Is your external hard drive formatted as HFS+? If it's FAT32, it will be 777 anyway.
Have you checked whether Windows thinks the files are read-only after syncing with the Mac client?
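A quick way to double-check from the Mac command line; the client name and volume name below are placeholders:

    # confirm the workspace options really are "noallwrite"
    p4 client -o my_client | grep Options
    # FAT32 volumes take their Unix permissions from mount options (typically 777),
    # so Perforce cannot mark individual files read-only there
    diskutil info /Volumes/External | grep "File System"
    # on an HFS+ volume, a forced re-sync restores the read-only bit
    p4 sync -f //depot/...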
Perforce does not like it when you access the same disk location from two different workspaces, nor the same workspace from two different hosts. This is because the server tracks the state of the files on the client; you're begging for your local store to lose synchronization with the depot.
What are you really trying to accomplish here?
I would recommend that you forget about FAT32; put your Windows workspace on an NTFS volume and your Mac workspace on an HFS+ volume. Submit & sync to share the data. Storage is cheap.