I need to split a path and transform it. For example, given
/Users/i0564454/go/src/projectA/node
at the end it should look like this:
/Users/i0564454/go/src/projectA/projectA
Everything stays the same, except that the last element is removed and the (new) last element is duplicated.
Can path/filepath help here without too many iterations?
https://gowalker.org/path/filepath
This is possible with the standard library too. You may use / combine:
path.Dir() to get the folder (remove the last element)
path.Base() to get the last element
and path.Join() to join path elements.
Without error checking (e.g. verifying that the passed path actually contains folders), the following function does what you want:
func convert(s string) string {
    dir := path.Dir(s)
    return path.Join(dir, path.Base(dir))
}
Testing it:
fmt.Println(convert("/Users/i0564454/go/src/projectA/node"))
Output (try it on the Go Playground).
/Users/i0564454/go/src/projectA/projectA
Note that package path handles slash-separated paths. If you need / want to support OS-specific paths, use the identical functions of the path/filepath package.
I have hosts of two types: wirelessHostA[0..N] and wirelessHostB[0..N]. I want to configure each host wirelessHostA[i] to send messages to the respective wirelessHostB[i]. Example: A[0] sends to B[0], A[10] sends to B[10]. Expression-wise I have got something like this:
*.wirelessHostA[0..${N}].app[ * ].destAddresses = "wirelessHostB[0..${N}]"
although this one is not correct. I am a bit unsure about how to declare a variable that is iterated over during a run rather than taking a single value per run.
You should not see the lines in the INI file as assignments where you can create procedural constructs like loops etc. Instead, think of them as pattern-matching rules. When a module needs a parameter, it scans the INI file from the start, line by line, and tries to match the first part (i.e. the part before =) against the current module path. If it matches, it assigns the second part to the parameter. If not, it continues with the next line in the INI file.
So first write a pattern rule, then a value that can be evaluated in that context. When you specify the value, you may refer to other parameters (that are available in the module's context) or to other contextual information, such as the matching submodule's index (if it is part of a vector). There are other functions to access the index of the parent module, etc.
In this case, we have a submodule vector of hosts where each one contains a submodule vector of apps. The index operator would return the index of the current context module (which is the position in the app vector), but we actually need the index of the app vector's parent (which is the position in the host vector). There is a NED function for this too, called parentIndex(). So the solution would look like this:
*.wirelessHostA[*].app[*].destAddresses = "wirelessHostB[" + string(parentIndex()) + "]"
See https://doc.omnetpp.org/omnetpp/manual/#sec:ned-functions:category-ned for more info.
Is there any way to load a text file in Processing while ignoring the case of the file name? I am opening multiple csv files, and some have the extension capitalized, ".CSV" rather than the standard ".csv", which results in errors due to the loadStrings() function being case-sensitive.
String file = sketchPath("test.csv");
String[] array = loadStrings(file);
The above gives the error:
This file is named test.CSV not test.csv. Rename the file or change your code.
I need a way to make the case of the file name or extension not matter. Any thoughts?
Short answer: No. The case-sensitivity of files comes from the operating system itself.
Longer answer: you could write code that simply tries to load the file under a few different case variants of the name, as in the sketch below.
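For instance, a minimal sketch of that brute-force approach (assuming the base name test from the question; the list of casings is just illustrative):
// Sketch: try the common casings of the extension one after another.
String[] array = null;
for (String name : new String[] { "test.csv", "test.CSV" }) {
  try {
    array = loadStrings(sketchPath(name));
    if (array != null) break;   // stop at the first casing that loads
  } catch (Exception e) {
    // this casing didn't exist or didn't match; try the next one
  }
}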
Another approach would be to use Java's File class, which has functions for listing various files under a directory, then iterating through them and finding the file that you want. More info is available in the Java reference, but it might look something like this:
String[] array = null;
File dir = new File(sketchPath(""));
for (String file : dir.list()) {
  // compare ignoring case so that "test.CSV" matches "test.csv"
  if (file.toLowerCase().startsWith(yourFileNameHere.toLowerCase())) {
    array = loadStrings(file);
    break;
  }
}
I haven't tested this code so you might have to play with it a little bit, but that's the basic idea. Of course, you might just want to rename your files ahead of time to avoid this problem.
Why not get the new filename from the error itself? To get the error statement into a String, we need to wrap loadStrings in a try and catch statement.
String[] array;
String file = "heLlo.txt";
try {
//if all is good then we load the file
array = loadStrings(file);
} catch (Exception e) {
//otherwise when we get the error, we store it in a String
String error = e.toString();
Then we need to use regular expressions to get the filename from the error statement using match(). The regex is /named ([^ ]+)/ (the filename can be assumed not to have any spaces in it).
String[] matches = match(error, "named ([^ ]+)");
The capture group will be in element 1 of the array containing the matches. So that would be the "real" filename.
String realFile = matches[1];
Finally we load the real file and store it in our array.
array = loadStrings(realFile);
}
Sure, if you want, you can put all of this into a function so that you won't have to repeat this code every time you load a file; a sketch of such a wrapper follows. But obviously, it would be easier if you just renamed or checked your filenames ahead of time.
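For illustration, here is a rough sketch of such a wrapper; the function name loadStringsAnyCase is made up, and it assumes the error message always reports the real name after the word "named":
// Hypothetical helper: tries the given name first, then falls back to the
// name reported in Processing's "This file is named ..." error message.
String[] loadStringsAnyCase(String file) {
  try {
    return loadStrings(file);
  } catch (Exception e) {
    String[] matches = match(e.toString(), "named ([^ ]+)");
    if (matches != null) {
      return loadStrings(matches[1]);
    }
    return null; // a different error entirely; give up
  }
}
It would then be called as String[] array = loadStringsAnyCase("test.csv");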
I have a task to find out all the PDF files under several price list folders using JRuby on Windows 7. The folder structure is as follows:
WorkSpace/Data/2015/city1/A/...
WorkSpace/Data/2015/city1/B/...
WorkSpace/Data/2015/city1/Pricelist/...
WorkSpace/Data/2015/city1/...
WorkSpace/Data/2015/city1/Price List/.....
WorkSpace/Data/2015/city2/A/...
WorkSpace/Data/2015/city2/C/...
WorkSpace/Data/2015/city2/Pricelist/...
WorkSpace/Data/2015/city2/D/...
WorkSpace/Data/2015/city2/Price List/.....
WorkSpace/Data/2016/city1/folder1/...
WorkSpace/Data/2016/city1/folder2/...
WorkSpace/Data/2016/city1/Pricelist/...
WorkSpace/Data/2016/city1/folder3/...
WorkSpace/Data/2016/city1/folder4/Price List/...
WorkSpace/Data/2016/city2/folder1/...
WorkSpace/Data/2016/city2/folder2/...
WorkSpace/Data/2016/city2/Pricelist/...
WorkSpace/Data/2016/city2/folder3/...
WorkSpace/Data/2016/city2/folder4/Price List/...
... represents all kinds of files under the corresponding folder.
I only want to find the PDF files under folder Pricelist and Price List. How can I do this?
I read Searching a folder and all of its subfolders for files of a certain type. That answer seems helpful, but how can I modify the expression /.*\.pdf$/ to achieve my goal?
Use a Recursive Glob
All you need to find your files is Dir#glob and Enumerable#grep. For example:
Dir.glob('WorkSpace/Data/**/*.pdf').grep(/Price List|Pricelist/)
This will collect all the PDF files using a recursive glob pattern that descends into all subdirectories starting at WorkSpace/Data (adjust the path to this starting directory as needed), and then returns only the results that match the directories you're grepping for. In this case, we're using a regular expression with alternation to match either of the two directories you're looking for, without regard to how deeply nested the desired directories might be.
There may be more efficient ways to do this, or you may need to tweak the regex if it's too permissive for you, but this certainly solves the problem without needing to know much more than the root of the directory tree you want to search.
You'll probably want to look at the Find module. The code would be something like this:
require 'find'

results = []
directory_list = []

Find.find('WorkSpace/Data') do |path|
  if FileTest.directory?(path)
    fn = File.basename(path)
    if fn == 'Pricelist' || fn == 'Price List'
      directory_list << path
      Find.prune
    end
  end
end

directory_list.each do |starting_path|
  Find.find(starting_path) do |path|
    if File.extname(path) == '.pdf'
      results << path
    end
  end
end
The first loop scans and finds all the directories that match the directory name condition, skipping scanning below them because that will happen in the second loop. The second loop takes each of the directories found by the first loop and scans them for files ending in the '.pdf' extension, adding each one to the results list.
You can hoist the second loop's body up into the first loop in place of directory_list << path, but the resulting code would be harder to read and wouldn't gain any performance improvement.
I've been writing a program in R that outputs randomization schemes for a research project I'm working on with a few other people this summer, and I'm done with the majority of it, except for one feature. Part of what I've been doing is making it really user friendly, so that the program prompts the user for certain pieces of information and therefore knows what needs to be randomized. I have it set up to check every piece of user input to make sure it's valid, and to give an error message and prompt the user again if it's not. The only thing I can't quite figure out is how to check whether the file name for the .csv output is valid. Does anyone know if there is a way to get R to check whether a string is a valid Windows file name? Thanks!
These characters aren't allowed in file names: / \ : * ? " < > |. So warn the user if the name contains any of those.
Some names are also disallowed: CON, PRN, AUX, NUL, COM1 to COM9, LPT1 to LPT9.
You probably want to check that the filename is valid using a regular expression. See this other answer for a Java example that should take minimal tweaking to work in R.
https://stackoverflow.com/a/6804755/134830
You may also want to check the filename length (260 characters for maximum portability, though longer names are allowed on some systems).
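As a rough illustration of the kind of regex-based check the linked Java example performs (the patterns and the length limit below are a simplified sketch, not an exhaustive rule set):
import java.util.regex.Pattern;

public class FilenameCheck {
    // Simplified sketch: rejects reserved characters and Windows reserved device names.
    private static final Pattern INVALID_CHARS = Pattern.compile("[\\\\/:*?\"<>|]");
    private static final Pattern RESERVED_NAMES =
        Pattern.compile("(?i)^(CON|PRN|AUX|NUL|COM[1-9]|LPT[1-9])(\\..*)?$");

    static boolean looksLikeValidWindowsFilename(String name) {
        return !name.isEmpty()
            && name.length() <= 260
            && !INVALID_CHARS.matcher(name).find()
            && !RESERVED_NAMES.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(looksLikeValidWindowsFilename("results.csv")); // true
        System.out.println(looksLikeValidWindowsFilename("COM1.csv"));    // false
    }
}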
Finally, in R, if you try to create a file in a directory that doesn't exist, it will fail, so you need to split the name into the file name and directory name (using basename and dirname) and try to create the directory first, if necessary.
That said, David Heffernan gives good advice in his comment to let Windows do the work in deciding whether or not it can create the file: you don't want to erroneously tell the user that a filename is invalid.
You want something a little like this:
nice_file_create <- function(filename)
{
  directory_name <- dirname(filename)
  if(!file.exists(directory_name))
  {
    ok <- dir.create(directory_name)
    if(!ok)
    {
      warning("The directory of that path could not be created.")
      return(invisible())
    }
  }
  tryCatch(
    file.create(filename),
    error = function(e)
    {
      warning("The file could not be created.")
    }
  )
}
But test it thoroughly first! There are all sorts of edge cases where things can fall over: try UNC network path names, "~", and paths with "." and ".." in them.
I'd suggest that the easiest way to make sure a filename is valid is to use fs::path_sanitize().
It removes control characters, reserved characters, and Windows-reserved filenames, truncating the string at 255 bytes in length.
I recently had an interview with a reputable company for the position of Software Developer and this was one of the questions asked:
"Given the following methods:
List subDirectories(String directoryName) { ... }
List filesInDirectory(String directoryName) { ... }
As the names suggest, the first method returns a list of names of immediate sub-directories in the input directory ('directoryName') and the second method returns a list of names of all files in this folder.
Print all the files in the file system."
I thought about it and gave the interviewer a pretty obvious recursive solution. She then told me to do it without recursion. Since recursion uses the call stack, I told her I would use an auxiliary stack instead, at which point she told me not to use a stack either. Unfortunately, I wasn't able to come up with a solution. I did ask how it could be done without recursion or a stack, but she wouldn't say.
How can this be done?
You want to use a queue and a BFS algorithm.
I guess some pseudo-code would be nice:
files = filesInDirectory("/")
foreach (file in files) {
fileQ.append(file)
}
dirQ = subDirectories("/")
while (dirQ != empty) {
dir = dirQ.pop
files = filesInDirectory(dir)
foreach (file in files) {
fileQ.append(file)
}
dirQ.append(subDirectories(dir))
}
while (fileQ != empty) {
print fileQ.pop
}
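For concreteness, here is a rough Java sketch of the same idea. It assumes the two methods from the question are provided elsewhere and return paths as List<String> (that element type is an assumption, since the question only says List):
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch only: subDirectories() and filesInDirectory() are assumed to be provided.
void printAllFiles(String root) {
    Queue<String> dirQ = new ArrayDeque<>();
    dirQ.add(root);
    while (!dirQ.isEmpty()) {
        String dir = dirQ.remove();           // take the next directory in FIFO order
        for (String file : filesInDirectory(dir)) {
            System.out.println(file);         // print the files of this directory
        }
        dirQ.addAll(subDirectories(dir));     // enqueue its immediate subdirectories
    }
}
No recursion is involved, and the only auxiliary structure is the FIFO queue of directories still to visit.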
If I understood correctly, immediate sub-directories are only the directories directly inside that folder. I mean, if we have the three paths /home/user, /home/config and /home/user/u001, we can say that both user and config are immediate subdirectories of /home/, but u001 isn't. The same applies if user and u001 are files (user is immediate while u001 isn't).
So you don't really need recursion or stack to return a list of immediate subdirectories or files.
EDIT: I originally thought that the OP wanted to implement the subDirectories() and filesInDirectory() functions themselves.
So you can do something like this to print all the files (kind of pseudocode):
List subd = subDirectories(current_dir);
List files = filesInDirectory(current_dir);
foreach (file in files) {
  print file.name();
}
while (!subd.empty()) {
  dir = subd.pop();
  files = filesInDirectory(dir.name());
  foreach (file in files) {
    print file.name();
  }
  subd.append(subDirectories(dir.path()));
}
I think that what #lqs suggests is indeed an acceptable answer that she might have been looking for: store the full path in a variable, append the directory name to it when you enter a subdirectory, and clip off the last directory name when you leave it. This way, the full path acts as a pointer to where you currently are in the file system.
Because the full path is always modified at the end, the full path behaves (not surprisingly) as your stack.
Interview questions aside, I think I would still pick a real stack over string manipulation though...