Removing Headers from CSV using Windows PowerShell [closed]

I have a CSV file which has several thousand lines; every 59 lines there are headers which I need to remove, but I am unsure how to do this. From what I have read, PowerShell is a good option, but the commands I have found do not seem to work, as PowerShell keeps spitting out errors saying 'grep' is not found.
Please could I have some pointers?
Thanks

This should be really simple.
Get-Content $File -ReadCount 59 | ForEach-Object { $_ | Select-Object -Skip 1 }
That reads the file in groups of 59 lines, and skips the first one of each group. Then either pipe it to Out-File or assign that to a variable, or whatever it is you want to do.
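For example, written out end to end (a minimal sketch; C:\Input.csv and C:\NoHeaders.csv are hypothetical paths standing in for your own):
Get-Content C:\Input.csv -ReadCount 59 |
    ForEach-Object { $_ | Select-Object -Skip 1 } |    # drop the header line of each 59-line group
    Out-File C:\NoHeaders.csv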
OK, if the header line is always the same you can just Get-Content the file and filter out lines that match the first line, then add the header back in as you said. This creates a new file containing only the header line, then copies the original into it, filtering out any line that matches that header.
$Header = Get-Content $File | Select-Object -First 1
$Header | Out-File C:\NewFile.csv
Get-Content $File | Where-Object { $_ -ne $Header } | Out-File C:\NewFile.csv -Append    # exact comparison, so data lines are only dropped if identical to the header

Related

What am I doing wrong with Out-File? [duplicate]

I cannot get this command to output to a text file. I am trying to use the Out-File cmdlet.
#Trim up the URL
$data = get-content C:\Dell\Temp\urlhausCopyDL.txt
$data | ForEach-Object {
    $items = $_.split(":")
    write-host (-join('http:',$items[1])) | Out-File "C:\Dell\Temp\Formulated.txt" -Append
}
It creates the file but it is blank.
The split removes whatever is after the second : in the URL, as we don't need it.
It outputs to the console great, but I just can't get it to write to a file!! :(
A snippet of urlhauscopyDL is here for you:
http://115.55.196.162:57955/bin.sh
http://182.240.54.209:60488/bin.sh
http://176.231.66.63:49310/.i
Thank you for your help team :)
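For context, the likely cause here: Write-Host sends its output to the console host rather than down the pipeline, so Out-File receives nothing and only creates an empty file. A minimal sketch that emits the string itself to the pipeline instead, using the same paths as in the question:
# Trim up the URL and write the result to a file
$data = Get-Content C:\Dell\Temp\urlhausCopyDL.txt
$data | ForEach-Object {
    $items = $_.Split(':')
    -join ('http:', $items[1])    # emit the trimmed URL to the pipeline instead of Write-Host
} | Out-File 'C:\Dell\Temp\Formulated.txt'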

Trimming specific line from .txt files [closed]

I have a large number of text (.txt) files where I need to trim the first line. The file looks similar to this:
siteID:8741234DB
Source location: XXXXXXX
Backup Information: XXXXXX
SourceLocation: 4445656DB
I'm simply trying to remove the "DB" from the end of line 1.
I'm preferably trying to find the simplest solution via batch or PowerShell. Most solutions I've come across remove the entire line, but as I mentioned I only need to trim the end of the first line, since an instance of "DB" may occur again elsewhere in the file.
Thanks.
Provided the files are not too large, you can rely on Get-Content to read the files and Set-Content to update/rewrite the files:
Get-ChildItem -Path FilePath\*.txt | ForEach-Object {
    if ((Get-Content -LiteralPath $_.FullName -TotalCount 1) -match 'DB$') {    # check for DB at end of first line
        $file = Get-Content $_.FullName       # read file into array
        $file[0] = $file[0] -replace 'DB$'    # trim DB from the end of the first line (TrimEnd('DB') would strip any run of trailing D or B characters)
        $file | Set-Content $_.FullName       # write array back to file
    }
}
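A slightly leaner variant of the same idea, reading each file only once (a sketch under the same assumptions about the FilePath folder):
Get-ChildItem -Path FilePath\*.txt | ForEach-Object {
    $lines = @(Get-Content -LiteralPath $_.FullName)    # @() so a one-line file is still an array
    if ($lines[0] -match 'DB$') {
        $lines[0] = $lines[0] -replace 'DB$'            # trim only the first line's trailing DB
        $lines | Set-Content -LiteralPath $_.FullName
    }
}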

PowerShell - move all files with text content in limited number of lines [closed]

I am looking for a command in PowerShell for finding and moving files that contain a certain string.
I have a folder with thousands of XML files. These XML files have the same structure and each file contains over 1000 lines, so Select-String will go through all of the file content, which is unnecessary because the string I am looking for is present in the first 10 lines of the file.
So I would like to somehow help PowerShell get the result faster. (Recursive searching is needed.)
I want to find those files (in folder file_source) and move them to another folder called destination. The search pattern is "\s*A73" (without quotes) and I have used this command:
Get-ChildItem -path ./file_source -recurse | Select-String -list -pattern "<type>\s*A73" | move -dest ./destination
Thanks.
You have not provided any code samples of what you are trying to do. That leaves some things open for interpretation. With that said, you can do something like the following:
$RootDirectoryToCheck = 'some directory path'
$DestinationDirectory = 'some directory path'
$TextToFind = 'some text'
Get-ChildItem -Path $RootDirectoryToCheck -Filter '*.xml' -File -Recurse |
    Where-Object { (Get-Content $_.FullName -TotalCount 10) -match $TextToFind } |
    Move-Item -Destination $DestinationDirectory
Explanation:
Get-ChildItem contains a -Recurse parameter to recursively search starting from -Path. -File ensures the output only contains files.
Get-Content's parameter -TotalCount tells PowerShell to only read the first 10 lines of a file. -match is a regex matching operator that returns True or False when the left-hand side is a single string. When the left-hand side is a collection of strings, it instead returns the matching elements (an empty result if nothing matches), which still behaves as expected in the Where-Object condition.
The matched files can then be piped into Move-Item. The -Destination parameter can be used to direct where to move the files.
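Applied to the folders from the question and the pattern from the attempted command (taking ./file_source and ./destination as the actual locations), the same sketch would look like:
$RootDirectoryToCheck = './file_source'
$DestinationDirectory = './destination'
$TextToFind = '<type>\s*A73'    # -match treats this as a regular expression

Get-ChildItem -Path $RootDirectoryToCheck -Filter '*.xml' -File -Recurse |
    Where-Object { (Get-Content $_.FullName -TotalCount 10) -match $TextToFind } |
    Move-Item -Destination $DestinationDirectory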
I doubt this is faster compared to reading the first 10 lines:
(dir <SourcePath> -Recurse -File | Select-String -Pattern <SearchTerm> -List).Path | Move-Item -Destination <DestinationPath>
But what the heck, since I just spent the time realizing that Select-String can't be made recursive on its own...

Unique words in a text file [duplicate]

I was wondering if there is a way to find (and display) all the unique words (words that appear once) in a text file. Could this be done just using the command line, or would I have to use something like a Python script?
If you don't want to write an application, then the easiest way I can think of to accomplish this is to use PowerShell. See this:
https://msdn.microsoft.com/en-us/powershell/reference/5.1/microsoft.powershell.utility/get-unique
The example that Microsoft provides populates a variable with the list of unique words:
$A = $(foreach ($line in Get-Content C:\Test1\File1.txt) {$line.tolower().split(" ")}) | sort | Get-Unique
You may wish to use additional delimiters, though, to split on punctuation, such as this:
$A = $(foreach ($line in Get-Content C:\test.txt) {$line.tolower().split(" .,?!;:")}) | sort | Get-Unique
Place this in a file with the extension .ps1 and you can run it from the command line. To get the values out of the variable, just add a second line with the variable name to echo the result to the screen:
$A
To get the count of items in the array you could do this:
$A.count
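Note that sort | Get-Unique returns each distinct word once, rather than the words that occur only once, which is what the question asks for. A sketch of the latter using Group-Object, assuming the same hypothetical C:\test.txt:
$words = foreach ($line in Get-Content C:\test.txt) {
    $line.ToLower() -split '[\s.,?!;:]+'    # split on whitespace and punctuation
}
$words | Where-Object { $_ } |              # drop empty tokens
    Group-Object |                          # count occurrences of each word
    Where-Object { $_.Count -eq 1 } |       # keep words seen exactly once
    Select-Object -ExpandProperty Name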

Unix header and footer matching patterns condition [closed]

I need to move my text files from a processing folder to a backup folder by reading the files in the directory. Each text file contains a header, a footer and other records. Before moving a file to backup I need to check that its header starts with 01 and its footer starts with 99. If the condition is satisfied I should move the file, otherwise skip the current file and continue with the other files. How do I write a condition to check that the first line starts with 01 and the last line starts with 99?
Please help me. Thanks in advance.
Sreeni
Try below:
cat file | head -1 | grep "^01" #check the first line start with 01
cat file | tail -1 | grep "^99" #check the last line start with 99
If "^" doesn't work just replace it with "/>". Both mean starting with.
You can also use awk to do it. First write an awk script, e.g. t.awk:
NR == 1 { if ($0 ~ /^01/) print }      # header check: first line starts with 01
{ last = $0 }                          # remember the most recent line read
END { if (last ~ /^99/) print last }   # footer check: last line starts with 99
and then run awk -f t.awk your_file_name.
Hope this helps.
