Use LINQ to query blocks of text?

I have a text file that contains some data in a "block" format:
source : source location
filename : somefile.txt
vendor : somevendor
version : xx.xx.xxx
source : source location2
filename : somefile2.txt
vendor : somevendor2
version : yy.yy.yyy
Can I use LINQ to query this data, and if so, how would you go about it? I have used LINQ to query lines of data from a text file many times, but never a "block" of data like the above. Thanks for the input.

Yes, you can use LINQ, although this approach is not particularly optimized for large files. Below is how to get the data:
var lines = File.ReadLines("C:\\text.txt")
    .Where(line => !string.IsNullOrWhiteSpace(line))
    .ToList();

for (int i = 0; i < lines.Count; i += 4)
{
    var location = lines[i].Split(':')[1];
    var fileName = lines[i + 1].Split(':')[1];
    var vendor = lines[i + 2].Split(':')[1];
    var version = lines[i + 3].Split(':')[1];
}
And here is a version using LINQ:
var result = Enumerable.Range(0, lines.Count / 4).Select(i => new {
    location = lines[4 * i].Split(':')[1],
    fileName = lines[4 * i + 1].Split(':')[1],
    vendor = lines[4 * i + 2].Split(':')[1],
    version = lines[4 * i + 3].Split(':')[1]
});
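If you are on .NET 6 or newer, another option (just a sketch, assuming the same four-line block layout and sample path as above) is to let Enumerable.Chunk do the grouping:
var blocks = File.ReadLines("C:\\text.txt")
    .Where(line => !string.IsNullOrWhiteSpace(line))
    .Chunk(4)   // one string[] per 4-line block
    .Select(block => new
    {
        Location = block[0].Split(':', 2)[1].Trim(),
        FileName = block[1].Split(':', 2)[1].Trim(),
        Vendor = block[2].Split(':', 2)[1].Trim(),
        Version = block[3].Split(':', 2)[1].Trim()
    });
Split(':', 2) keeps any colons that appear inside the value itself, and Trim() removes the surrounding spaces.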

Related

Magento 2 - wrong number of digits between separators for prices in Indian Rupees

I am using Magento 2.2.3. My default currency is INR, but prices show in the wrong format.
They should be ₹77,65,000.00. How do I correct the price format? Currently it is wrong and looks like the USD format.
You can output a formatted price with the following code:
<?php
$objectManager = \Magento\Framework\App\ObjectManager::getInstance(); // Instance of Object Manager
$priceHelper = $objectManager->create('Magento\Framework\Pricing\Helper\Data'); // Instance of Pricing Helper
$price = 1000; //Your Price
$formattedPrice = $priceHelper->currency($price, true, false);
?>
To change the digit grouping itself, edit the locale pattern in this file: vendor/magento/zendframework1/library/Zend/Locale/Data/en.xml
On line 3353, under the currencyFormat section with type="standard", change the pattern from <pattern>¤#,##0.00</pattern> to <pattern>¤ #,##,##0.00</pattern>.
Still, on the PDP page and in the cart summary the price format does not change, because there the price format comes from JavaScript, where Magento uses a RegExp that only handles the US price format.
To fix that, change the code in the file below.
File path: vendor/magento/module-catalog/view/base/web/js/price-utils.js (first extend this file in your theme directory and make your changes there)
In the formatPrice function, comment out all of the lines that come after this statement:
i = parseInt(
    amount = Number(Math.round(Math.abs(+amount || 0) + 'e+' + precision) + ('e-' + precision)),
    10
) + '';
Then add the following code directly below that line:
var x = i;
x = x.toString();
var afterPoint = '';
if (x.indexOf('.') > 0)
    afterPoint = x.substring(x.indexOf('.'), x.length);
x = Math.floor(x);
x = x.toString();
var lastThree = x.substring(x.length - 3);
var otherNumbers = x.substring(0, x.length - 3);
if (otherNumbers != '')
    lastThree = ',' + lastThree;
var response = otherNumbers.replace(/\B(?=(\d{2})+(?!\d))/g, ",") + lastThree + afterPoint;
return pattern.replace('%s', response);
Then deploy and clear the cache with `rm -rf var/cache/*`.
And then you're done. For example, a price previously displayed as 453,453 will now display in the Indian manner as 4,53,453.
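As a side note, the grouping rule this patch implements (the last three digits form one group, and everything before them is grouped in pairs) can be sketched outside Magento as well. The following C# helper is only an illustration of that rule, not Magento code:
using System;
using System.Globalization;
using System.Text.RegularExpressions;

static class IndianFormat
{
    // Illustration only: last three digits as one group, then pairs of digits.
    public static string Group(decimal amount)
    {
        string[] parts = Math.Abs(amount).ToString("0.00", CultureInfo.InvariantCulture).Split('.');
        string digits = parts[0];
        if (digits.Length <= 3)
            return digits + "." + parts[1];

        string lastThree = digits.Substring(digits.Length - 3);
        string rest = digits.Substring(0, digits.Length - 3);
        // Same lookahead idea as the JS snippet above, but applied to pairs of digits.
        rest = Regex.Replace(rest, @"\B(?=(\d{2})+(?!\d))", ",");
        return rest + "," + lastThree + "." + parts[1];
    }
}

// IndianFormat.Group(453453m)  -> "4,53,453.00"
// IndianFormat.Group(7765000m) -> "77,65,000.00"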

Context Free Grammar for English Sounding Names

I am currently writing an application that will generate random data; specifically, random names. I have made some decent progress, but am not satisfied with many of the generated names. The problem lies in my production rules, which I've attached to the bottom of this post.
The basic idea is: consonant, vowel, consonant, vowel, but some consonants themselves map to vowels (such as b<VO>).
I have not fully created the rules yet, but the final idea would follow the format shown below. However, rather than finishing it, I would like to make a better basis for the production rules.
I have tried to find a reference that discusses either: a CFG already created for English-sounding words, or an English reference that disassembles the basic format of letter combinations for words. Unfortunately, I have not been able to find a useful resource to help me advance farther than I already have. Does anyone know of a place I should look, or a reference I can look at?
ALSO: in your opinion, do you believe a context-sensitive grammar might work better?
//the following will deal with single vowels and consonants
var CO = ['b','c','d','f','g','h','j','k','l','m','n','p','qu','r','s','t','v','w','x','y','z'];
CO.probabilities = [2.41,4.49,6.87,3.59,3.25,9.84,0.24,1.24,6.5,3.88,10.9,3.11,0.153,9.67,10.2,14.6,1.58,3.81,0.242,3.19,0.12];
CO.name = "CO";
var VO = ['a','e','i','o','u'];
VO.probabilities = [21.43,33.33,18.28,19.7,7.23];
VO.name = "VO";
var LETTER = ['<VO>','<CO>'];
LETTER.probabilities = [38.1,61.9];
LETTER.name = "LETTER";
//the following deal with consonant pairs
var BH = ['c','p','r','s','t']; //the first part of a ch, ph, rh, sh or th pair (before H)
BH.probabilities = [20,10,20,25,25];
BH.name = "BH";
var BL = ['b','c','f','g','p','s']; //before letter l
BL.probabilities = [10,20,10,10,25,25];
BL.name = "BL";
var COP = ['<BH>h','<BL>l']; //consonant pairs
COP.probabilities = [50,50];
COP.name = "COP";
//this is a generic syllable that does not take grammar rules into consideration
var SYL = ['<CO><VO>','<VO><CO>','<CO><VO><VO>'];
SYL.probabilities = [50,20,30];
SYL.name = "SYL";
//the following deal with mid-word syllables
var CLOSED = ['<CO><VO><CO>','<CO><VO><CO><CO>'];
CLOSED.probabilities = [75,25];
CLOSED.name = "CLOSED";
var OPEN = ['<CO><VO>','<CO><CO><VO>'];
OPEN.probabilities = [60,40];
OPEN.name = "OPEN";
var VR = ['<VO>r']; //vowel-r
VR.probabilities = [100];
VR.name = "VR";
var MID = ['<CLOSED>','<OPEN>','<VR>'];
MID.probabilities = [33,33,33];
MID.name = "MID";
//the following will deal with ending syllables
var VCE = ['<VO><CO>e','<LETTER><VO><CO>e'];
VCE.probabilities = [75,25];
VCE.name = "VCE";
var CLE = ['<CO>le'];
CLE.probabilities = [100];
CLE.name = "CLE";
var OE = ['tion','age','ive']; //other endings
OE.probabilities = [33,33,33];
OE.name = "OE";
var ES = ['<VCE>','<CLE>','<OE>','<VR>']; //contains all ending syllables
ES.probabilities = [40,40,20];
ES.name = "ES";
var rules = [CO,VO,BH,BL,COP,LETTER,SYL,CLOSED,OPEN,VR,MID,VCE,CLE,OE,ES];
//These are some highly-defined production rules
var streetSuffix = ['road','street','way','avenue','drive','grove','lane','gardens','place','crescent','close','square','hill','circus','mews','vale','rise','mead'];
streetSuffix.probabilities = [15,15,5,10,5,2.7,2.7,2.7,2.7,2.7,2.7,2.7,2.7,2.7,2.7,2.7,2.7,2.7];
var states = ['Alabama','Alaska','American Samoa','Arizona','Arkansas','California','Colorado','Connecticut','Delaware','Florida','Georgia','Guam','Hawaii','Idaho','Illinois','Indiana','Iowa','Kansas','Kentucky','Louisiana','Maine','Marshall Islands','Maryland','Massachusetts','Michigan','Minnesota','Mississippi','Missouri','Montana','Nebraska','Nevada','New Hampshire','New Jersey','New Mexico','New York','North Carolina','North Dakota','Ohio','Oklahoma','Oregon','Palau','Pennsylvania','Puerto Rico','Rhode Island','South Carolina','South Dakota','Tennessee','Texas','Utah','Vermont','Virgin Island','Virginia','Washington','West Virginia','Wisconsin','Wyoming'];
var cityNewWordSuffix = ['city','town',''];
var cityEndWordSuffix = ['polis','ville','ford','furt','forth','shire','berg','gurg','borough','brough','field','kirk','bury','stadt',''];
var siteSuffix = ['com','org','net','edu'];
/**
This will generate a random name
*/
function generateRandomName() {
    //string will be random length of CO VO pattern for now
    var result;
    result = "<COP><VO><MID><VO><ES>";
    while (hasNonTerminal(result)) {
        result = replaceFirstNonTerminal(result);
    }
    return result;
}
Here are a few words generated by the machine in its current state:
"cheiroene",
"sloeraase",
"sledehgeute",
"rhaorenone",
"rheerisute",
"chaereehe",
"sletraoege",
"sluureese",
"chaheyleete",
"chierauhe",
"ploclooate",
"glawofhaice",
"thanisgoage",
"slelaodose",
"blaereode",
"shihudeife",
"slaereene",
"pleheaele",
"rhepicsaile",
"ploeruoge",
"sliareuhe",
"thaereafe",
"thaaraeke",
"cheoreate",
"shofetniote",
"phiraoese",
"clilniueye",
"slepceikede",
"cligloueohe",
"phitleoime",

NullReferenceException Error when trying to iterate an IEnumerator

I have a DataTable and want to select some records with LINQ in this format:
var result2 = from row in dt.AsEnumerable()
              where row.Field<string>("Media").Equals(MediaTp, StringComparison.CurrentCultureIgnoreCase)
                  && (String.Compare(row.Field<string>("StrDate"), dtStart.Year.ToString() +
                          (dtStart.Month < 10 ? '0' + dtStart.Month.ToString() : dtStart.Month.ToString()) +
                          (dtStart.Day < 10 ? '0' + dtStart.Day.ToString() : dtStart.Day.ToString())) >= 0
                      && String.Compare(row.Field<string>("StrDate"), dtEnd.Year.ToString() +
                          (dtEnd.Month < 10 ? '0' + dtEnd.Month.ToString() : dtEnd.Month.ToString()) +
                          (dtEnd.Day < 10 ? '0' + dtEnd.Day.ToString() : dtEnd.Day.ToString())) <= 0)
              group row by new { Year = row.Field<int>("Year"), Month = row.Field<int>("Month"), Day = row.Field<int>("Day") } into grp
              orderby grp.Key.Year, grp.Key.Month, grp.Key.Day
              select new
              {
                  CurrentDate = grp.Key.Year + "/" + grp.Key.Month + "/" + grp.Key.Day,
                  DayOffset = (new DateTime(grp.Key.Year, grp.Key.Month, grp.Key.Day)).Subtract(dtStart).Days,
                  Count = grp.Sum(r => r.Field<int>("Count"))
              };
and then I try to iterate over it with the following code:
foreach (var row in result2)
{
//... row.DayOffset.ToString() + ....
}
this issue occurred:
Object reference not set to an instance of an object.
I think it happens when there is no record matching the above criteria.
I tried switching to an enumerator like this, using MoveNext() to check whether there is any data:
var enumerator2 = result2.GetEnumerator();
if (enumerator2.MoveNext()) { //-- }
but I still get the same error.
What's the problem?
I guess in one or more rows Media is null.
You then call Equals on null, which results in a NullReferenceException.
You could add a null check:
var result2 = from row in dt.AsEnumerable()
              where row.Field<string>("Media") != null
                  && row.Field<string>("Media").Equals(MediaTp, StringComparison.CurrentCultureIgnoreCase)
              ...
or use a surrogate value like:
var result2 = from row in dt.AsEnumerable()
              let media = row.Field<string>("Media") ?? String.Empty
              where media.Equals(MediaTp, StringComparison.CurrentCultureIgnoreCase)
              ...
(note that the last approach is slightly different)
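A third option, for what it's worth, is the static string.Equals overload, which treats null operands safely (it returns false when only one side is null), so no explicit null check is needed:
var result2 = from row in dt.AsEnumerable()
              where string.Equals(row.Field<string>("Media"), MediaTp,
                                  StringComparison.CurrentCultureIgnoreCase)
              ...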

SharePoint 2013 - Sorting Search Results not working (KeywordQuery-SortList)

I am using KeywordQuery to search, and the SortList does not affect the results; it always returns the first 5 results. Any suggestions? The code is below...
using (KeywordQuery query = new KeywordQuery(site))
{
    var fedManager = new FederationManager(application);
    var owner = new SearchObjectOwner(SearchObjectLevel.SPSite, site.RootWeb);
    query.SourceId = fedManager.GetSourceByName("NewsRS", owner).Id;
    query.QueryText = string.Format("WorkflowStatusOWSCHCS:Approved PublishedUntilDate>=\"{0}\" OR NewsNewsPublishedDate<=\"{0}\"", DateTime.Now);
    query.KeywordInclusion = KeywordInclusion.AllKeywords;
    query.RowLimit = 5;
    query.StartRow = 1;
    query.SelectProperties.Add("NewsFriendlyUrl");
    query.SelectProperties.Add("NewsNewsTeaser");
    query.SelectProperties.Add("NewsNewsDate");
    query.SelectProperties.Add("NewsPublishedUntilDate");
    query.SelectProperties.Add("NewsNewsContent");
    query.SelectProperties.Add("NewsNewsPublishedDate");
    query.SelectProperties.Add("NewsNewsImage");
    query.SortList.Add("NewsNewsDate", SortDirection.Descending);

    var searchExecutor = new SearchExecutor();
    var myResults = searchExecutor.ExecuteQuery(query);
}
... the NewsNewsDate is marked as Sortable
query.RowLimit = 5; => You are explicitly setting RowLimit to 5, which is why it always returns the first 5 results. Change RowLimit to the number of results you need.
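For example (50 here is just an arbitrary illustration of a larger limit), the descending sort then applies across everything that comes back:
query.RowLimit = 50; // return up to 50 results instead of the first 5
query.SortList.Add("NewsNewsDate", SortDirection.Descending);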

TFS Meltdown - How can I recover shelved changes

I had my working folder set to a RAM drive. During the night there was an extended power outage, the UPS ran out, and my machine went down. Thankfully I shelved my changes before I went home, and that shelveset is visible in Team Explorer. The shelveset includes the project file and some new files which have not yet been added to source control.
I'm attempting to recover the affected files but am getting errors:
Attempting to view the shelved files gives TF10187 (or a general, unnumbered) "The system cannot find the file specified", even though I can see them in the Pending Changes list.
Attempting to unshelve the set in its entirety gives errors relating to incompatible changes which I can't resolve.
I'm guessing TFS cached the shelveset locally on the RAM disc which has since reinitialised itself and therefore lost the cache, but I'm hoping I'm wrong.
Can anyone assist?
I had someone come to me and ask the same question yesterday. Fortunately they had a backup of the TFS project database (tfs_), so we restored that to another database, and I poked around and figured it out (so if you have a backup, then yes, you can recover all the files).
First of all a little info on the tables in the database.
A shelveset can be identified by querying the tbl_Workspace table and looking for all records with Type=1 (Shelveset); you can of course also filter by name with the WorkspaceName column.
The other tables of interest are:
tbl_PendingChanges (which references the WorkspaceId from tbl_Workspace) - which files are part of the ShelveSet
tbl_VersionedItem (linked via ItemId column to tbl_PendingChanges) - parent path and name of files
tbl_Content (linked via FileId to PendingChanges) - this is where your file content is stored as compressed (gzip) data
Now for the solution; the following query can show you your files:
SELECT c.[CreationDate], c.[Content], vi.[ChildItem], vi.ParentPath
FROM [dbo].[tbl_Content] c
INNER JOIN [dbo].[tbl_PendingChange] pc ON pc.FileId = c.FileId
INNER JOIN [dbo].[tbl_Workspace] w ON w.WorkspaceId = pc.WorkspaceId
INNER JOIN [dbo].[tbl_VersionedItem] vi ON vi.ItemId = pc.ItemId
WHERE w.WorkspaceName = '<YOUR SHELVESET NAME>'
With that I wrote some code to get the data back from SQL and then decompress the content with the GZipStream class and save the files off to disk.
A week of work was back in an hour or so.
This was done with TFS 2010.
Hope this helps!
Here is an updated response for TFS 2015, which had another schema change. Below is a C# console application that writes the recovered files to the Desktop. Make sure to fill in the connString and shelvesetName variables.
using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.IO.Compression;

namespace RestoreTFSShelve
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            string shelvesetName = "";
            string connString = "";

            SqlConnection cn = new SqlConnection(connString);
            SqlCommand cmd = new SqlCommand(@"
                SELECT c.[CreationDate], c.[Content], v.FullPath
                FROM [dbo].[tbl_Content] c
                INNER JOIN [dbo].tbl_FileMetadata f ON f.ResourceId = c.ResourceId
                INNER JOIN [dbo].tbl_FileReference b ON f.ResourceId = b.ResourceId
                INNER JOIN [dbo].[tbl_PendingChange] pc ON pc.FileId = b.FileId
                INNER JOIN [dbo].[tbl_Workspace] w ON w.WorkspaceId = pc.WorkspaceId
                INNER JOIN [dbo].[tbl_Version] v ON v.ItemId = pc.ItemId AND v.VersionTo = 2147483647
                WHERE w.WorkspaceName = @ShelvesetName", cn);
            cmd.Parameters.AddWithValue("@ShelvesetName", shelvesetName);

            DataTable dt = new DataTable();
            new SqlDataAdapter(cmd).Fill(dt);

            foreach (DataRow row in dt.Rows)
            {
                string[] arrFilePath = row[2].ToString().Split('\\');
                string fileName = arrFilePath[arrFilePath.Length - 2];
                byte[] unzippedContent = Decompress((byte[])row[1]);
                File.WriteAllBytes(Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Desktop), fileName), unzippedContent);
            }
        }

        private static byte[] Decompress(byte[] gzip)
        {
            using (GZipStream stream = new GZipStream(new MemoryStream(gzip), CompressionMode.Decompress))
            {
                const int size = 4096;
                byte[] buffer = new byte[size];
                using (MemoryStream memory = new MemoryStream())
                {
                    int count = 0;
                    do
                    {
                        count = stream.Read(buffer, 0, size);
                        if (count > 0)
                        {
                            memory.Write(buffer, 0, count);
                        }
                    }
                    while (count > 0);
                    return memory.ToArray();
                }
            }
        }
    }
}
I had something similar happen to me with a TFS 2012 instance. My SQL query was a bit different since the schema changed for TFS 2012. Hope this helps someone.
SELECT c.[CreationDate], c.[Content], v.FullPath
FROM [dbo].[tbl_Content] c
INNER JOIN [dbo].[tbl_File] f ON f.ResourceId = c.ResourceId
INNER JOIN [dbo].[tbl_PendingChange] pc ON pc.FileId = f.FileId--c.FileId
INNER JOIN [dbo].[tbl_Workspace] w ON w.WorkspaceId = pc.WorkspaceId
INNER JOIN [dbo].[tbl_Version] v ON v.ItemId = pc.ItemId AND v.VersionTo = 2147483647
WHERE w.WorkspaceName = @ShelvesetName
2147483647 is 2^31 - 1, which I think may stand for "latest" in TFS 2012. I then also wrote a C# widget to decompress the gzip-encoded stream and dump it to disk with the proper file name. I am not preserving hierarchy.
string cnstring = string.Format("Server={0};Database={1};Trusted_Connection=True;", txtDbInstance.Text, txtDbName.Text);
SqlConnection cn = new SqlConnection(cnstring);
SqlCommand cmd = new SqlCommand(@"
    SELECT c.[CreationDate], c.[Content], v.FullPath
    FROM [dbo].[tbl_Content] c
    INNER JOIN [dbo].[tbl_File] f ON f.ResourceId = c.ResourceId
    INNER JOIN [dbo].[tbl_PendingChange] pc ON pc.FileId = f.FileId --c.FileId
    INNER JOIN [dbo].[tbl_Workspace] w ON w.WorkspaceId = pc.WorkspaceId
    INNER JOIN [dbo].[tbl_Version] v ON v.ItemId = pc.ItemId AND v.VersionTo = 2147483647
    WHERE w.WorkspaceName = @ShelvesetName", cn);
cmd.Parameters.AddWithValue("@ShelvesetName", txtShelvesetName.Text);

DataTable dt = new DataTable();
new SqlDataAdapter(cmd).Fill(dt);

listBox1.DisplayMember = "FullPath";
listBox1.ValueMember = "FullPath";
listBox1.DataSource = dt;

if (!Directory.Exists(txtOutputLocation.Text)) { Directory.CreateDirectory(txtOutputLocation.Text); }

foreach (DataRow row in dt.Rows)
{
    string[] arrFilePath = row[2].ToString().Split('\\');
    string fileName = arrFilePath[arrFilePath.Length - 2];
    byte[] unzippedContent = Decompress((byte[])row[1]);
    File.WriteAllBytes(Path.Combine(txtOutputLocation.Text, fileName), unzippedContent);
}
}

static byte[] Decompress(byte[] gzip)
{
    using (GZipStream stream = new GZipStream(new MemoryStream(gzip), CompressionMode.Decompress))
    {
        const int size = 4096;
        byte[] buffer = new byte[size];
        using (MemoryStream memory = new MemoryStream())
        {
            int count = 0;
            do
            {
                count = stream.Read(buffer, 0, size);
                if (count > 0)
                {
                    memory.Write(buffer, 0, count);
                }
            }
            while (count > 0);
            return memory.ToArray();
        }
    }
}
