I wrote code to update the lettering (capitalization) of the first name in Zoho, but it's not working

Here's the Deluge script that should capitalize the first letter and make the other letters lowercase, but it isn't working:
a = zoho.crm.getRecordById("Contacts",input.ID);
d = a.get("First_Name");
firstChar = d.subString(0,1);
otherChars = d.removeFirstOccurence(firstChar);
Name = firstChar.toUppercase() + otherChars.toLowerCase();
mp = map();
mp.put("First_Name",d);
b = zoho.crm.updateRecord("Contacts", Name,{"First_Name":"Name"});
info Name;
info b;
I tried capitalizing the first letter and making the other letters lowercase, but it isn't working as expected.

Try using concat:
Name = firstChar.toUppercase().concat( otherChars.toLowerCase() );

Try removing the double quotes around Name in the following statement. The reason is that Name is a variable holding the case-adjusted name, whereas "Name" is just the literal string "Name".
From:
b = zoho.crm.updateRecord("Contacts", Name,{"First_Name":"Name"});
To:
b = zoho.crm.updateRecord("Contacts", Name,{"First_Name":Name});

Related

How can I filter a stream using a regexp predicate to get the negated list?

I am trying to filter out anything that does not match the regexp.
What I want is to collect into a list every city name that contains characters other than a-z, 0-9 and -, so I can deal with those invalid city names afterwards.
But whatever I try, I either end up with a list of the valid cities or an IllegalArgumentException that reports cities whose characters are all valid.
String str;
List<String> invalidCharactersList = cityName.stream()
.filter(Pattern.compile("[^a-z0-9-]*$").asPredicate())
.collect(toList());
// Check for invalid names
if (!invalidCharactersList.isEmpty()) {
str = (inOut) ? "c" : "q";
throw new IllegalArgumentException("City name characters "
+ str + ": for city name " + invalidCharactersList.get(0)
+ ": fails constraint city names [a-z, 0-9, -]");
}
Following is some test data; the check currently fails on the first list, but I want it to fail only on the last one:
List<String> c = new ArrayList<>(Arrays.asList("fastcity", "bigbanana", "xyz"));
List<Integer> x = new ArrayList<>(Arrays.asList(23, 23, 23));
List<Integer> y = new ArrayList<>(Arrays.asList(1, 10, 20));
List<String> q = new ArrayList<>(Arrays.asList("fastcity*", "bigbanana", "xyz&"));
@Holger suggested changing the filter to
filter(Pattern.compile("[^a-z0-9-]").asPredicate())
Thanks, this works fine.
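For completeness, here is a minimal, self-contained Java sketch (using the question's own test data; the class name is just for illustration) of why the corrected pattern works: Pattern.asPredicate() tests with find(), so "[^a-z0-9-]" matches exactly the names that contain at least one disallowed character, while the original anchored pattern "[^a-z0-9-]*$" could always find a zero-length match at the end of a string and therefore let every name through.
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class InvalidCityNames {
    public static void main(String[] args) {
        List<String> q = Arrays.asList("fastcity*", "bigbanana", "xyz&");

        // asPredicate() uses find(), so a name passes the filter as soon as it
        // contains a single character outside a-z, 0-9 and '-'.
        List<String> invalid = q.stream()
                .filter(Pattern.compile("[^a-z0-9-]").asPredicate())
                .collect(Collectors.toList());

        System.out.println(invalid); // prints [fastcity*, xyz&]
    }
}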

The letter disappeared after splitting a string in my Ruby program

I am a newbie in Ruby. In my Ruby program there is a piece of code for parsing a geocode. The code is shown below:
string = "GPS:3;S23.164865;E113.428970;88"
info = string.tr("GPS:",'')
info_array = info.split(";")
puts "GPS: #{info_array[0]},#{info_array[1]},#{info_array[2]}"
The code should split the string into three pieces: 3, S23.164865 and E113.428970;88, and the expected output is
GPS: 3,S23.164865,E113.428970
but the result is:
GPS: 3,23.164865,E113.428970
Yes, the 'S' letter disappeared...
If I use
string = "GPS:3;N23.164865;E113.428970;88"
info = string.tr("GPS:",'')
info_array = info.split(";")
puts "GPS: #{info_array[0]},#{info_array[1]},#{info_array[2]}"
it prints the expected result:
GPS: 3,N23.164865,E113.428970
I am very confused why this happens. Can you help?
It looks like you were expecting String#tr to behave like String#gsub.
Calling string.tr("GPS:", '') does not replace the complete string "GPS:" with an empty string. Instead, it replaces any individual character from the set "GPS:" with an empty string. Commonly you will see .tr() called with an equal number of input and replacement characters, in which case each input character is replaced by the output character in the corresponding position. But called as you have it, with only the empty string '' as the translation argument, it deletes every G, P, S and : anywhere in the string.
>> "String with S and G and a: P".tr("GPS:", '')
=> "tring with and and a "
Instead, use .gsub('GPS:', '') to replace the complete match as a group.
string = "GPS:3;S23.164865;E113.428970;88"
info = string.gsub('GPS:', '')
info_array = info.split(";")
puts "GPS: #{info_array[0]},#{info_array[1]},#{info_array[2]}"
# prints
GPS: 3,S23.164865,E113.428970
Here we've called .gsub() with a plain string argument; it is probably more often called with a regexp pattern, though.

String splitting in Visual FoxPro 9

I have a column of strings separated by a comma.
Example: City, Zipcode
I want to make a column with only the city populated, i.e. everything before the comma.
How has anyone else accomplished this? I know with Foxpro you can usually accomplish the same task various ways. Any help would be appreciated.
EDIT: SOLUTION
GETWORDNUM(FIELD,1,",")
This worked to give the text string before the comma from the column FIELD.
The easiest way to do that is to use STREXTRACT(), i.e.:
lcColumnData = "City, Zipcode"
? STREXTRACT(m.lcColumnData, "",",")
STORE ALINES(aCZ, "Atlanta, 30301", ",") TO iCZ
City = aCZ[1]
ZipCode = aCZ[2]
?City
?ZipCode
Try this:
str = "City, Zipcode"
*initialize the column value leftcol
leftcol = ''
*find comma position
pos = At(',', str)
Do Case
    Case pos > 1
        * there is a comma and something before that; take everything before that pos
        leftcol = Left(str, pos-1)
    Case pos = 1
        * first char is comma
        leftcol = ''
    Otherwise
        * there is no comma; take the whole string
        leftcol = str
EndCase

How to reverse tokenization after running tokens through name finder?

After using NameFinderME to find the names in a series of tokens, I would like to reverse the tokenization and reconstruct the original text with the names that have been modified. Is there a way I can reverse the tokenization operation in the exact way in which it was performed, so that the output is the exact structure as the input?
Example
Hello my name is John. This is another sentence.
Find sentences
Hello my name is John.
This is another sentence.
Tokenize sentences.
> Hello
> my
> name
> is
> John.
>
> This
> is
> another
> sentence.
My code that analyzes the tokens above looks something like this so far.
TokenNameFinderModel model3 = new TokenNameFinderModel(modelIn3);
NameFinderME nameFinder = new NameFinderME(model3);
List<Span[]> spans = new List<Span[]>();
foreach (string sentence in sentences)
{
String[] tokens = tokenizer.tokenize(sentence);
Span[] nameSpans = nameFinder.find(tokens);
string[] namedEntities = Span.spansToStrings(nameSpans, tokens);
//I want to modify each of the named entities found
//foreach(string s in namedEntities) { modifystring(s) };
spans.Add(nameSpans);
}
Desired output, perhaps masking the names that were found.
Hello my name is XXXX. This is another sentence.
In the documentation there is a link to this post describing how to use the detokenizer, but I don't understand how the operations array relates to the original tokenization (if at all):
https://issues.apache.org/jira/browse/OPENNLP-216
Create an instance of SimpleTokenizer:
String sentence = "He said \"This is a test\".";
SimpleTokenizer instance = SimpleTokenizer.INSTANCE;
Tokenize the sentence using the tokenize(String str) method from SimpleTokenizer:
String tokens[] = instance.tokenize(sentence);
The operations array must contain the same number of operation names as there are tokens; in other words, the two arrays must have equal length.
Store the operation name N times (tokens.length times) into the operations array:
Operation operations[] = new Operation[tokens.length];
String oper = "MOVE_RIGHT"; // see the post linked above for the full list of operations
for (int i = 0; i < tokens.length; i++)
{ operations[i] = Operation.parse(oper); }
System.out.println(operations.length);
Here the operation array length will be equal to the tokens array length.
Now create an instance of DetokenizationDictionary by passing tokens and operations arrays to the constructor.
DetokenizationDictionary detokenizeDict = new DetokenizationDictionary(tokens, operations);
Pass the DetokenizationDictionary instance to the DictionaryDetokenizer class to detokenize the tokens:
DictionaryDetokenizer dictDetokenize = new DictionaryDetokenizer(detokenizeDict);
DictionaryDetokenizer.detokenize requires two parameters: (a) the tokens array and (b) a split marker.
String st = dictDetokenize.detokenize(tokens, " ");
Use the Detokenizer.
String text = detokenize(myTokens, null);
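To produce the desired masked output ("Hello my name is XXXX."), one approach is to overwrite the tokens covered by each name span before detokenizing. A minimal Java sketch, assuming tokens and nameSpans come from the tokenizer and NameFinderME calls shown in the question (the class and method names are only illustrative):
import opennlp.tools.util.Span;

public class NameMasker {
    // Mask every token covered by a name span; feed the returned array into
    // the detokenization steps above instead of the raw tokens.
    public static String[] maskNames(String[] tokens, Span[] nameSpans) {
        String[] masked = tokens.clone();
        for (Span span : nameSpans) {
            // Span.getEnd() is exclusive, so this loop covers exactly the name tokens
            for (int i = span.getStart(); i < span.getEnd(); i++) {
                masked[i] = "XXXX";
            }
        }
        return masked;
    }
}
Because the spans were produced from the same token array, the indices line up, and the structure of the original sentence is preserved by whichever detokenizer you then apply to the masked tokens.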

Regex to match a value only once in a text value

I am dealing with a dirty data source that has some key-value pairs I have to extract. For example:
First Name = John Last Name = Smith Home Phone = 555-333-2345 Work Phone = Email = john.doe#email.com Zip From = 11772 Zip To = 11782 First Name = John First Name = John
To extract the First Name, I am using this regular expression:
/First Name = ([a-zA-Z]*)/
How do I prevent multiple matches in the case where the First Name is duplicated as shown above?
Here is a version of this on Rubular.
match will only get the first match (you would use scan to get all):
str.match(/First Name = ([a-zA-Z]*)/).captures.first
#=> "John"
(given your string is in str)
[] will also give you the first match:
str[/First Name = ([a-zA-Z]*)/, 1]
The 1 refers to the first capture group.
/^First Name = ([a-zA-Z]*)/
This will work too; just add ^ to anchor the match to the start of the line.
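For reference, the same "first match only" extraction as a minimal Java sketch (class and variable names are just for illustration); Matcher.find() stops at the first occurrence, so the duplicated "First Name = John" entries later in the string are ignored:
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FirstNameOnce {
    public static void main(String[] args) {
        String str = "First Name = John Last Name = Smith First Name = John";

        // find() returns after the first match; group(1) is the captured name
        Matcher m = Pattern.compile("First Name = ([a-zA-Z]*)").matcher(str);
        if (m.find()) {
            System.out.println(m.group(1)); // prints John
        }
    }
}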
