My problem is described in the title.
In addition, I have installed Yarn in the following directory, C:\OpenServer\modules\yarn, and I don't know what to do next.
I hope for your help because I didn't find anything on Google or elsewhere on the net.
I found it! Yeh!
We need to change one option in the settings, on the 'Server' tab, and create a file that tells OpenServer which programs it must pick up and preload.
I hope this is a good place to ask this, otherwise please redirect me to the correct forum.
I have a large amount of data (~400 GB) that I need to distribute to all nodes in a cluster (~100 nodes). Any help on how to do this would be appreciated; here is what I've tried so far.
I was thinking of doing this using torrents but I'm running into a bunch of issues. These are the steps I tried:
I downloaded ctorrent to create the torrent, seed it, and download it. I had a problem because I didn't have a tracker.
I found that qbittorrent-nox has an embedded tracker so I downloaded that on one of my nodes and set the tracker up.
I then created the torrent using the tracker I had set up and copied it to my nodes.
When I run the torrent with ctorrent on the node with the actual data on it to seed the data I get:
Seed for others 72 hours
- 0/0/1 [1/1/1] 0MB,0MB | 0,0K/s | 0,0K E:0,1 Connecting
When I run on one of the nodes to download the data I get:
- 0/0/1 [0/1/0] 0MB,0MB | 0,0K/s | 0,0K E:0,1
So it seems they aren't connecting to the tracker properly, but I don't know why.
I am probably doing something very wrong, but I can't figure it out.
If anyone can help me with what I am doing, or has any way of distributing the data efficiently, even without torrents, I would be very happy to hear it.
Thanks in advance for any help available.
but the node that's supposed to be seeding thinks it has 0% of the file, and so it doesn't seed.
If you create a metadata file (.torrent) with tool A and then want to seed it with tool B, you need to point B to both the metadata and the data itself (the content files).
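For example (a hedged sketch with placeholder paths): with ctorrent you can point it at the payload it should find on disk via the -s option, so it hash-checks the existing data instead of assuming it has nothing to seed:

    # run on the machine that already holds the data; paths are placeholders
    ctorrent -s /path/to/existing/data mydata.torrent

With qbittorrent-nox the equivalent is setting the save path to the directory that already contains the data, as described further down.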
I know it is a different issue now, and might require a different topic, but I'm hoping you might have ideas.
You should create a new question which will have more room for you to provide details.
So, this is embarrassing: I might have had it working for a while now, but I did change my implementation since I started. I just re-checked, and the files I was transferring were corrupted on one of my earlier tries, and I had been using them ever since.
So, to sum up, this is what worked for me, in case anybody else ends up needing the same setup (a consolidated sketch of the commands follows the list):
I create torrents using "transmission-create /path/to/file/or/directory/to/be/torrented -o /path/to/output/directory/output_file_name.torrent" (because qbittorrent-nox doesn't provide a torrent-creation tool that I could find).
I run the torrent on the computer that has the actual files, so it will seed, using "qbittorrent-nox ~/path/to/torrent/file/name_of_file.torrent"
I copy the .torrent file to all nodes and run "qbittorrent-nox ~/path/to/torrent/file/name_of_file.torrent" to start downloading
qbittorrent settings I needed to configure:
In "Downloads" change "Save files to location" to the location of the data in the node that is going to be seeding #otherwise that node wont know it has the files specified in the torrent and wont seed them.
To avoid issues with the torrents sometimes starting as queued and requiring a "force resume" (this doesn't appear to have fixed the problem 100%, though):
In "Speed" tab uncheck "Enable bandwidth management (uTP)"
uncheck "Apply rate limit to uTP connections"
In "BitTorrent" tab uncheck "Torrent Queueing"
Thanks for all the help, and I'm sorry I hassled people for no reason at some point.
Can someone please help me out with this? I tried using Long Path Tool, but they want me to pay in order to delete the folder. However, I can't find the file the system is complaining about. I went to the folder
C:\Users\Casey\Desktop\Workspace\LegalHoldings\Sprints\Sprint5\Expunctions\LegalHoldings.Expunctions.Service.External\ServiceReferences\FillingReviewMDEService\LegalHoldings.Expunctions.Service.External.FilingReviewMDEService.GetFeesCalculationsResponse.datasource
This file:
FilingReviewMDEService.GetFeesCalculationsResponse.datasource
Does not exist in the folder?!?!?!
I don't know what to do. I have been reading about a lot of workarounds online, and most people suggest using Long Path Tool, but I remember having this issue in the past and I can't remember how I solved it. I believe it was something to do with the Developer Command Prompt and resetting some paths.
All help would be greatly appreciated
In VS/TFS 2012, I found this helpful:
[Open TFS Explorer] -> [Right click the root folder] -> Advanced -> 'Remove Mapping...'
Then, you can change the path:
HTH
Usually these problems can be solved by shortening the paths higher up the tree.
It looks like your local path is the problem, so try mapping your code to a shorter root folder (e.g. C:\code rather than c:\users\Casey\desktop\workspace...)
Alternatively, you may be able to rename some mid-level folders in your TFS structure to shorten the paths. But that's more extreme and probably not necessary in this case.
Not sure if you're even using the data binding features that the .datasource file is generated for, but turning them off in your service reference configuration by manually editing the .svcmap file would solve your problem.
After editing, make sure you use the Update Service Reference feature to get rid of the unwanted file.
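If I remember the schema correctly (treat this as an assumption rather than a verified reference), the switch sits in the <ClientOptions> section of the Reference.svcmap file:

    <ClientOptions>
      <!-- setting this to false should stop the *.datasource files from being generated -->
      <EnableDataBinding>false</EnableDataBinding>
      <!-- ...leave the other options as they are... -->
    </ClientOptions>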
The second step would be to not map $/ to your user profile in your workspace mapping, but to map $/LegalHoldings/Sprints/Sprint5/Expunctions to something like C:\Workspace\Sprint5 specifically; that would drastically reduce the path depth required for your project.
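As a sketch, that remapping could also be done from the command line (the workspace name below is a placeholder):

    tf workfold /map "$/LegalHoldings/Sprints/Sprint5/Expunctions" C:\Workspace\Sprint5 /workspace:MyWorkspace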
If TFS still has a pending change for this file, you can use the tf utility from your workspace folder
C:\Users\Casey\Desktop\Workspace\LegalHoldings> tf undo $/LegalHoldings/Sprints/Sprint5/Expunctions/LegalHoldings.Expunctions.Service.External/ServiceReferences/FillingReviewMDEService/LegalHoldings.Expunctions.Service.External.FilingReviewMDEService.GetFeesCalculationsResponse.datasource
to get rid of the pending change.
I need help setting up my SCSS file watcher in PhpStorm. I'm on Ubuntu, I have PhpStorm 6, and I have RVM with Ruby 1.9.3-p194 and Sass 3.2.5. I've set my File Watcher options in Settings >> File Watchers as follows:
Once I had done that, I changed something in my .scss file, but I got this error:
...-1.9.3-p194/bin/sass --no-cache --update style_update.scss:style_update.css
/usr/bin/env: ruby: No such file or directory
(I added three dots at the beginning of the first line to make it shorter.) So what might be the problem?
The problem is that the IDE is not able to find ruby on the PATH. Note that the PATH may be different in a terminal and in applications that you start from the Ubuntu launcher.
Use the Environment variables option in the file watcher configuration to specify a custom PATH value that includes the directory containing the required executables.
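For example, assuming Ruby was installed through RVM (adjust the value to whatever "which ruby" prints in a working terminal; the username and version below are placeholders), the entry could look something like this:

    Variable: PATH
    Value:    /home/yourname/.rvm/rubies/ruby-1.9.3-p194/bin:/usr/local/bin:/usr/bin:/bin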
A quote from an earlier answer/comment:
"[...] ... just do what is written, add PATH variable ... [...]"
That's exactly the common misunderstanding between the people helping (who are mostly not teachers and who do these tasks on a daily basis) and the people asking, who can imagine three different things behind a standardized answer and its words. Stack Overflow should expand on this, not just repeat manuals or documentation.
In PhpStorm, for example, you have two empty fields after hitting the plus in the corner of the Environment variables settings window. The left field has the header "Variable", the right field the header "Value". So if somebody is not familiar with the PATH and environment variables of desktop or server systems, this person will be slightly confused about what has to be placed in the first field. Is it "ruby"? Is it "PATH"? Wouldn't it override the whole PATH variable of the system? Is it a custom name I can choose, and how does the system know about it? No explanation can be found.
If you don't know the logic behind it, you cannot work out the right steps from this standard-formulated advice. While I am very excited about the feature set of PhpStorm, I find the documentation slightly too standardized and unexplanatory. That's why many entries have bad votes from readers below them. It's as if somebody asked, "How do I bake bread?" and the answerer said, "First you have to prepare flour and create dough, then you can bake bread." So what has the asker learned from this answer? Exactly: nothing they didn't already know. OK, maybe the question was not clear enough, but this is also a common case: how do you ask correctly if you don't know what you are actually asking for? How is the person supposed to know that they need to understand how to set PATH variables? I think this is what separates makers from teachers. Teachers learn to bridge that gap. Documentation often lacks good teachers writing it. People who work in support should be better at thinking like teachers.
To be more constructive: the PhpStorm documentation says in its example, "choose PATH_TO_LIB as NAME and the path to the library for the VALUE field." Again: where does this PATH_TO_LIB come from? Is it a name you choose yourself, or a predefined, empty variable name that PhpStorm watches for? If something goes wrong and you start to look for issues that may be causing it and start to worry about wrong settings, you are lost on these questions even as an experienced PHP developer.
At the moment I generally prefer using tools like guard and RVM-based Ruby installations over built-in file watcher solutions like the ones in PhpStorm, which mostly look for a system-wide Ruby first. With RVM we have project-based paths to Ruby and its gems, and RVM prevents breaking the compile chain of long-term theme or module development that depends on certain gem versions. Watch http://www.youtube.com/watch?v=CmTuvzbPduI where Sebastian Siemssen (a well-known Drupal developer) explains why this is a good concept. But to implement this nicely with PhpStorm features, you need a better low-level way to edit paths in PhpStorm.
Sadly this involves pressing save again, since the save-file event needs to be triggered. I would love to see a better implementation, more flexibility, and a better explanation of how to use the PhpStorm built-in watchers to get a refresh-on-edit workflow.
The CI server was disconnected from the network for a while for some strange reason, and when it came back up, Jenkins displayed no jobs. However, in the directory where the jobs live, /var/lib/jenkins/jobs/, the two jobs that should appear are there, but they show no evidence of existing in the web client.
I tried using the 'copy existing job' option and pointed it to /var/lib/jenkins/jobs/existing_test, but it tells me: no such job /var/lib/jenkins/jobs/existing_test
Any suggestions as to how to get this to work?
I know this question may be outdated, but a possible solution is to run Jenkins under the appropriate user (the one it ran under previously). This helped me.
I ended up just rebuilding the jobs from scratch; I wasn't able to find a fix.
First I would look in the Jenkins logs; as your data is in /var/lib/jenkins, I would guess your log files are in /var/log/jenkins. Maybe you can find out what's wrong from there.
Also, you could try the "Reload Configuration from Disk" link in the "Manage Jenkins" view. That reloads the configuration files from your directories and may bring your jobs back. In any case, you should be able to see something in your logs. If the logs are empty, check file permissions; I used to have problems with that after updating sometimes.
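A quick sketch of those checks (the jenkins user/group and paths below are the usual defaults for the Ubuntu/Debian packages; adjust if your install differs):

    # which user owns the job directories, and can the jenkins user read them?
    ls -l /var/lib/jenkins/jobs
    # fix ownership if it drifted (e.g. after Jenkins was once started as root)
    sudo chown -R jenkins:jenkins /var/lib/jenkins
    # then restart the service or use "Reload Configuration from Disk" in Manage Jenkins
    sudo service jenkins restart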
http://www.ee99ee.com/blog/2009/02/08/how-to-get-aspnet-mvc-working-under-iis-51-on-windows-xp/
Can the following be put into batch commands? If not, is there an alternative where the configuration can be set through an executable, without the user having to configure it through IIS?
The simplest solution is to include .aspx in the controller part of the route, so the route has the following format:
{controller}.aspx/{action}/{id}.
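As a sketch (the defaults are placeholders), that means registering the route in Global.asax.cs roughly like this:

    routes.MapRoute(
        "Default",
        "{controller}.aspx/{action}/{id}",
        new { controller = "Home", action = "Index", id = "" }
    );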
You might be able to set those properties with the GET and SET commands of the Adsutil.vbs script (which works with IIS 5.1 and would probably be in the directory C:\Inetpub\AdminScripts\).
I tried a little poking around, and couldn't figure out which keys were needed. But you might be able to figure them out using the Metabase Explorer (from the IIS 6.0 ResKit, or here is a third-party version which I haven't tried).
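For reference, the general calling pattern looks like this (the metabase path is only an example; as said above, I haven't verified which property actually needs changing):

    cd C:\Inetpub\AdminScripts
    cscript adsutil.vbs GET W3SVC/1/ROOT/ScriptMaps
    REM SET takes the metabase path followed by the new value(s):
    REM cscript adsutil.vbs SET <metabase path> <new value>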
I have been researching this topic as well. First of all, I don't agree with the downvotes, moderators; this is a perfectly valid question with research done by the asker. Second, the answer is no, you cannot put IIS commands in an executable; the whole point of IIS is that you set it up once and you are done, so just set it up correctly the first time and you should be good. Also, that tutorial you are looking at is rubbish. Just my 2 cents!