Trimming specific line from .txt files [closed] - windows

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 2 years ago.
I have a large number of text (.txt) files where I need to trim the first line. The file looks similar to this:
siteID:8741234DB
Source location: XXXXXXX
Backup Information: XXXXXX
SourceLocation: 4445656DB
I'm simply trying to remove the "DB" from the end of line 1.
I'd prefer the simplest solution in batch or PowerShell. Most solutions I've come across remove the entire line, but I only need to trim the end of the first line, since an instance of "DB" may occur again later in the file.
Thanks.

Provided the files are not too large, you can rely on Get-Content to read the files and Set-Content to update/rewrite the files:
Get-ChildItem -Path FilePath\*.txt | ForEach-Object {
    # check for "DB" at the end of the first line
    if ((Get-Content -LiteralPath $_.FullName -TotalCount 1) -match 'DB$') {
        $file = Get-Content $_.FullName       # read the file into an array of lines
        $file[0] = $file[0] -replace 'DB$'    # remove the trailing "DB" from the first line
        $file | Set-Content $_.FullName       # write the array back to the file
    }
}
Note that .TrimEnd('DB') would be a subtle bug here: TrimEnd takes a set of characters, so it would strip any trailing run of 'D's and 'B's rather than the literal string "DB". -replace 'DB$' removes exactly one trailing "DB".
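For comparison, the same first-line trim can be sketched in a Unix-style shell, assuming GNU sed is available (e.g. via Git Bash or WSL on Windows); the sample file below is invented for the demo:

```shell
# Create a throw-away sample file (hypothetical content from the question).
printf 'siteID:8741234DB\nSourceLocation: 4445656DB\n' > sample.txt

# "1s" restricts the substitution to line 1 only, so later "DB"
# occurrences in the file are left alone.
sed -i '1s/DB$//' sample.txt
cat sample.txt
```

The `$` anchor ensures only a "DB" at the very end of the first line is removed.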

Related

PowerShell - move all files with text content in limited number of lines [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I am looking for a PowerShell command to find and move files that contain a certain string.
I have a folder with thousands of XML files. These XML files have the same structure and each contains over 1,000 lines, so Select-String will go through all of the file content, which is unnecessary, because the string I am looking for is in the first 10 lines of each file.
So I would like to somehow help PowerShell get the result faster. (Recursive searching is needed.)
I want to find those files (in folder file_source) and move them to another folder called destination. The search pattern is "\s*A73" (without quotes) and I have used this command:
Get-ChildItem -path ./file_source -recurse | Select-String -list -pattern "<type>\s*A73" | move -dest ./destination
Thanks.
You have not provided any code samples of what you are trying to do. That leaves some things open for interpretation. With that said, you can do something like the following:
$RootDirectoryToCheck = 'some directory path'
$DestinationDirectory = 'some directory path'
$TextToFind = 'some text'
Get-ChildItem -Path $RootDirectoryToCheck -Filter '*.xml' -File -Recurse |
    Where-Object { (Get-Content $_.FullName -TotalCount 10) -match $TextToFind } |
    Move-Item -Destination $DestinationDirectory
Explanation:
Get-ChildItem contains a -Recurse parameter to recursively search starting from -Path. -File ensures the output only contains files.
Get-Content's -TotalCount parameter tells PowerShell to read only the first 10 lines of a file. -match is a regex matching operator that returns True or False when the left-hand side is a single string. When the left-hand side is a collection of strings, it instead returns the elements that match (an empty collection if nothing matches).
The matched files can then be piped into Move-Item. The -Destination parameter can be used to direct where to move the files.
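For readers on a Unix-like shell, the same "check only the head of each file" idea can be sketched with head and grep (GNU tools assumed; the directory and file contents below are invented for the demo):

```shell
# Set up a toy directory tree (hypothetical data).
mkdir -p file_source destination
printf '<type> A73\nrest of file\n' > file_source/match.xml
printf '<type> B99\nrest of file\n' > file_source/other.xml

# Test only the first 10 lines of each file before deciding to move it,
# so large files are never read in full.
for f in file_source/*.xml; do
    if head -n 10 "$f" | grep -q '<type>\s*A73'; then
        mv "$f" destination/
    fi
done
ls destination   # -> match.xml
```

head stops reading after 10 lines, which is the same optimisation -TotalCount gives you in PowerShell.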
I doubt this is faster compared to reading only the first 10 lines:
(dir <SourcePath> -Recurse -File | Select-String -Pattern <SearchTerm> -List).Path | Move-Item -Destination <DestinationPath>
But what the heck, since I just spent the time realizing that Select-String can't be made recursive on its own...

Howto rename bunch of filenames which are calculated [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I want to rename a bunch of files. The new name is calculated from the old one: actual filename + 3600 = new filename.
Importantly, the underscore in the .pid filenames has to stay.
Thanks in advance!
My system is Debian Stretch.
Actual Filename:
134235.error
134235_.pid
134235.tiff
13893.error
13893_.pid
13893.tiff
1.error
1_.pid
1.tiff
Rename to:
137835.error
137835_.pid
137835.tiff
17493.error
17493_.pid
17493.tiff
3601.error
3601_.pid
3601.tiff
for fname in *; do
    echo mv -- "$fname" "${fname/*[[:digit:]]/$((${fname%%[^[:digit:]]*}+3600))}"
done
If everything looks ok, remove echo.
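The two parameter expansions are the tricky part; here is a standalone bash sketch of what each one produces, using one of the question's filenames:

```shell
fname="134235_.pid"

# %%[^[:digit:]]* strips the longest suffix starting at the first
# non-digit character, leaving just the leading number.
num=${fname%%[^[:digit:]]*}
echo "$num"                                   # -> 134235

# ${fname/pattern/replacement} swaps the leading digits (the longest
# match of *[[:digit:]]) for the incremented number.
echo "${fname/*[[:digit:]]/$((num + 3600))}"  # -> 137835_.pid
```

Because only the digit run is replaced, the underscore and extension survive unchanged.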
With Perl's standalone rename command. Some distributions call it prename.
rename -n 's/(\d+)(.+)/${\($1+3600)}$2/' *
If everything looks fine, remove -n.

My perl script doesn't see the txt file [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
My Perl program creates the file
10001.ICNTL.20160602.20160603.OPR.GAAP.PROD.PFI.PRE.txt
Then in a method I have this code:
if ( -e $report ) {
    # we parse the file; here is some code, at the end
}
else {
    print "*** Skipping \\NYNAS\NYNDS\VOL\DATA\INVACCT\FUND_RECS_PFI\10001.ICNTL.20160603.PROD.GAAP.PFI\10001.ICNTL.20160602.20160603.OPR.GAAP.PROD.PFI.PRE.TXT";
}
I cannot understand why the script doesn't see the file. I've checked it several times, letter by letter. Could it be because of the upper-case TXT, when in reality it is lower case?
Is your file 10001.ICNTL.20160602.20160603.OPR.GAAP.PROD.PFI.PRE.txt in directory \\NYNAS\NYNDS\VOL\DATA\INVACCT\FUND_RECS_PFI?
At a guess, you're not escaping the file path correctly. Even if you use single quotes, there is no way of representing the two leading backslashes of a Uniform Naming Convention (UNC) path without escaping at least one of them.
Check the output of print $report, "\n" to see what you've really written.
My preference is to use four backslashes at the start of the path string, like this
my $report = '\\\\NYNAS\NYNDS\VOL\DATA\INVACCT\FUND_RECS_PFI\10001.ICNTL.20160603.PROD.GAAP.PFI\10001.ICNTL.20160602.20160603.OPR.GAAP.PROD.PFI.PRE.TXT';
print -e $report ? "Found\n" : "Not found\n";
And Perl allows you to use forward slashes in place of backslashes in a Windows path, so you could write this instead if you prefer, but paths like this aren't valid in other Windows software
my $report = '//NYNAS/NYNDS/VOL/DATA/INVACCT/FUND_RECS_PFI/10001.ICNTL.20160603.PROD.GAAP.PFI/10001.ICNTL.20160602.20160603.OPR.GAAP.PROD.PFI.PRE.TXT';
Another alternative is to relocate your current working directory. You cannot cd to a UNC path at the Windows command prompt, but Perl allows you to chdir to one successfully
chdir '//NYNAS/NYNDS/VOL/DATA/INVACCT/FUND_RECS_PFI' or die $!;
Thereafter all relative file paths will be relative to this new working directory on your networked system

Removing Headers from CSV using Windows Powershell [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I have a CSV file with several thousand lines; every 59 lines there is a header row which I need to remove, but I am unsure how to do this. From what I have read, PowerShell is a good option, but the commands I have found do not seem to work: PowerShell keeps reporting errors about 'grep' not being found.
Please could I have some pointers?
Thanks
This should be really simple.
Get-Content $File -ReadCount 59 | ForEach-Object { $_ | Select-Object -Skip 1 }
That reads the file in groups of 59 lines and skips the first line of each group. Then pipe the result to Out-File, assign it to a variable, or whatever it is you want to do.
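For comparison only (not PowerShell), the same skip-every-59th-line idea can be sketched with awk on a Unix-like system, using a numbered stream as stand-in data:

```shell
# Lines 1, 60, 119, ... fall at position 1 of each 59-line group and
# are the repeated "headers"; keep everything else.
seq 1 118 | awk '(NR - 1) % 59 != 0' > kept.txt
head -n 3 kept.txt   # -> 2 3 4 (line 1 was dropped as a header)
wc -l < kept.txt     # -> 116 (two headers removed from 118 lines)
```

The modulus test is the same grouping logic that -ReadCount 59 plus Select -Skip 1 expresses in PowerShell.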
OK, if the header line is always the same you can just Get-Content the file and filter out the lines that match the first line, then add the header back as you said. This creates a new file containing only the header line, then appends the original file to it, filtering out every line that matches the header.
$Header = Get-Content $File | Select-Object -First 1
$Header | Out-File C:\NewFile.csv
Get-Content $File | Where-Object { $_ -notmatch [regex]::Escape($Header) } | Out-File C:\NewFile.csv -Append
(Escaping the header keeps -notmatch from treating any regex metacharacters in it as a pattern.)

I want to delete a string from each line in a file [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I have a file whose contents are
/a/b/c
/a/b/d/xyz
/a/b/c/nmxnxlcs
...
I want to delete the string /a/b/ from the file.
I want to do it using shell script.
With sed:
sed 's#/a/b/##' file
If you want to update the file in place,
sed -i.bak 's#/a/b/##' file
This will create a backup file.bak and update file with the new values.
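A quick demonstration of the -i.bak backup behaviour (GNU sed assumed; BSD/macOS sed wants the suffix as a separate argument):

```shell
# Toy input matching the question's format.
printf '/a/b/c\n/a/b/d/xyz\n' > demo.txt

# Edit in place; the pre-edit content is saved as demo.txt.bak.
sed -i.bak 's#^/a/b/##' demo.txt
cat demo.txt       # -> c, d/xyz
cat demo.txt.bak   # the original content is preserved in the backup
```

Using # as the substitution delimiter avoids having to escape every / in the path pattern.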
As mbratch comments in his answer, it can happen that you just want to replace lines starting with /a/b/. In that case, you can use:
sed 's#^/a/b/##' file
where ^ stands for beginning of line.
Test
$ cat a
/a/b/c
/a/b/d/xyz
/a/b/c/nmxnxlcs
hello/a/b/aa
$ sed 's#/a/b/##' a
c
d/xyz
c/nmxnxlcs
helloaa
Although it's not mentioned in the problem statement, the example suggests that you only want to delete the string if it's at the beginning of the line:
sed 's:^/a/b/::' myfile.txt
This will change:
/a/b/c/foo.txt
To:
c/foo.txt
And will change:
/a/b/c/x/a/b/foo.txt
To:
c/x/a/b/foo.txt
And not to:
c/xfoo.txt
