I'm trying to send content to a campaign template in MailChimp using the MailChimp.net API. My template contains an editable region marked with `mc:edit="question"`.
The MailChimp API wants the section content as a JSON string (a struct/associative array), but I can't seem to get the data to show up in the editable section. I've tried several different ways:
CampaignSegmentOptions segmentOptions = new CampaignSegmentOptions();
segmentOptions.Match = "All";
CampaignCreateContent content = new CampaignCreateContent();
content.HTML = "<p>Testing</p>";
CampaignCreateOptions options = new CampaignCreateOptions();
options.Title = "Testing";
options.ListId = listID;
options.ToName = "Test Name";
options.FromEmail = "user#example.com";
options.FromName = "Testing from Example.com";
options.Subject = "Test Subject";
options.FacebookComments = false;
//Using struct attempt
sections s = new sections();
s.question = "What did the lion need?";
//Using string attempt
//String[] s = new string [] {"question\":\"What did the lion need?"};
//Using dictionary attempt
Dictionary<string,string> dict = new Dictionary<string,string>();
dict.Add("question","What did the lion need?");
content.Sections = dict;
Campaign result = mc.CreateCampaign("regular", options, content);
Any suggestions? Thank you in advance.
Solved. In my case the Dictionary approach was the only one that worked. I found that whenever I edited an existing MailChimp template, MailChimp appeared to rewrite my mc:edit fields after I saved them, adding its own code and breaking my mc:edit fields. However, if I exported one of their templates to my desktop and then imported it back as a new template, editing the template on their website no longer changed my code after saving. I remember seeing a note in the documentation about exporting and importing, but nothing that implied that was the only way to keep my fields intact.
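For reference, a condensed sketch of the approach that worked, pulled from the code above (listID and the mc client instance are assumed to already exist; the dictionary key must match the mc:edit region name in the imported template):
// Condensed sketch of the working approach: Sections is keyed by mc:edit region name.
CampaignCreateOptions options = new CampaignCreateOptions();
options.Title = "Testing";
options.ListId = listID;
options.FromEmail = "user@example.com";
options.FromName = "Testing from Example.com";
options.Subject = "Test Subject";
CampaignCreateContent content = new CampaignCreateContent();
content.Sections = new Dictionary<string, string>
{
    { "question", "What did the lion need?" } // key = mc:edit name in the template
};
Campaign result = mc.CreateCampaign("regular", options, content);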
I am able to successfully attach a PDF file to a ServiceNow table record using the GlideSysAttachment API and the attachment.write() function in a script; however, whenever I download the file and try to open it, I get the error shown in the screenshot below.
Code snippet
(function execute() {
try{
var rec = new GlideRecord('incident');
var attachment = new GlideSysAttachment();
var incidentSysID = incident.number;
rec.get(incidentSysID);
var fileName = 'Test_Incident.pdf';
var contentType = 'application/pdf'; // Also tried with contentType as 'text/pdf'
var content = pdf_content;
var agr = attachment.write(rec, fileName, contentType, content);
gs.info('The PDF attachment sys_id is: ' + agr);
}catch(err){
gs.log('Got Error: ' + err);
gs.info(err);
}
})()
I also tried "AttachmentCreator" with ecc_queue within script but same error occurs. Below is code for it.
(function execute() {
var attCreator = new GlideRecord('ecc_queue');
attCreator.agent = "AttachmentCreator";
attCreator.topic = "AttachmentCreator";
attCreator.name = "Test.pdf" + ":" + "text/pdf";
//Also tried, "Test.pdf:application/pdf"
attCreator.source = "incident"+":"+ incident.number;
// Record Table name and sys_id of the particular record
var content = pdf_content; // pdf_content is any string variable
var stringUtil = new GlideStringUtil();
var base64String = stringUtil.base64Encode(content);
var isValid = GlideStringUtil.isBase64(base64String);
base64String = gs.base64Encode(content); // also tried gs.base64Encode instead of GlideStringUtil
gs.info("Is valid base64 format in ecc_queue ? "+ isValid);
attCreator.payload = base64String; //base64 content of the file
attCreator.insert();
})()
I am able to attach and view Excel and Word files with similar scripts without any issues. I have checked the system properties for attachments and everything looks fine. I can also view a PDF uploaded from the UI to a particular table record, but not one attached via the REST API or scripts.
I have also tried sending the encoded data as bytes, Base64, or a plain string, but nothing seems to work. I don't get any errors, and an attachment sys_id is returned each time the attachment is created.
After modifying the functions above slightly to run in a scoped application instead of the global scope, I got some useful information from the logs while debugging:
05:38:38.411 Security restricted: File type is not allowed or does not match the content for file Test.pdf
05:38:38.410 Security restricted: MIME type mismatch for file: Test.pdf. Expected type:application/pdf, Actual type: text/plain
05:38:38.394 App:XYZ App x_272539_xyz_ap: Is valid base64 format in ecc_queue ? true
First, a comment: this line in your code is only accidentally working -- make sure you understand that a task number is not the record's sys_id:
var incidentSysID = incident.number; // should be incident.sys_id
Next, it's unclear where the PDF content is coming from. If the code listed is your complete code, I would expect the errors shown, since you note that pdf_content is "any string variable" -- an arbitrary string is not valid PDF data, which is why the MIME check reports text/plain instead of application/pdf.
ServiceNow does have the capability to create a PDF from an HTML argument.
Generating a PDF from HTML
Here's a helpful blog post for getting a PDF (Platform generated) of an existing record:
Love PDF? PDF loves you too
I need to send reminders to users of a web application.
To do this I use iCal.Net from NuGet. Following the usage instructions, I'm able to send an email with an attachment containing the .ics file.
Is it possible to format the event description as HTML and have it work in both the Outlook desktop client and Google Calendar in Chrome?
On these clients I see a plain-text representation with all the HTML tags. I also tested on the Windows 10 Calendar app, where the event description is correctly displayed as HTML. Do I have to set something more?
This is my calendar string generation:
var now = DateTime.Now;
var later = scadenza.Value;
var e = new Event
{
DtStart = new CalDateTime(later),
DtEnd = new CalDateTime(later),
Description = mailText,
IsAllDay = true,
Created = new CalDateTime(DateTime.Now),
Summary = $"Reminder for XYZ",
};
var attendee = new Attendee
{
CommonName = "…..",
Rsvp = true,
Value = new Uri("mailto:" + ConfigurationManager.AppSettings["ReminderReceiver"])
};
e.Attendees = new List<IAttendee> { attendee };
var calendar = new Calendar();
calendar.Events.Add(e);
var serializer = new CalendarSerializer(new SerializationContext());
var icalString = serializer.SerializeToString(calendar);
HTML formatting isn't part of the iCalendar spec, so it's up to applications to add support for this via non-standard property fields, which are designated with an X- prefix. (Further reading on non-standard properties if you're interested.)
Google Calendar
It doesn't look like Google Calendar has any support for HTML-formatted events, so I think you're out of luck there.
Outlook
Outlook uses a number of these X- fields for various things; for HTML-formatted descriptions, it uses X-ALT-DESC and specifies FMTTYPE=text/html like this:
X-ALT-DESC;FMTTYPE=text/html:<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//E
N">\n<HTML>\n<HEAD>\n<META NAME="Generator" CONTENT="MS Exchange Server ve
rsion rmj.rmm.rup.rpr">\n<TITLE></TITLE>\n</HEAD>\n<BODY>\n<!-- Converted
from text/rtf format -->\n\n<P DIR=LTR><SPAN LANG="en-us"><FONT FACE="Cali
bri">This is some</FONT></SPAN><SPAN LANG="en-us"><B> <FONT FACE="Calibri"
>HTML</FONT></B></SPAN><SPAN LANG="en-us"><FONT FACE="Calibri"></FONT></SP
AN><SPAN LANG="en-us"><U> <FONT FACE="Calibri">formatted</FONT></U></SPAN>
<SPAN LANG="en-us"><FONT FACE="Calibri"></FONT></SPAN><SPAN LANG="en-us"><
I> <FONT FACE="Calibri">text</FONT></I></SPAN><SPAN LANG="en-us"><FONT FAC
E="Calibri">.</FONT></SPAN><SPAN LANG="en-us"></SPAN></P>\n\n</BODY>\n</HT
ML>
You may be able to get away with much simpler HTML; you'll have to test to see what Outlook supports/allows. The HTML block above was generated by Outlook 2013. I have successfully run that block through a serialization round-trip with ical.net, and Outlook opens it without any fuss.
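For example, an untested, stripped-down value for the same property might look like the sketch below; whether Outlook accepts markup this minimal is something you'd have to verify yourself.
// Hypothetical simpler alternative to the Outlook-generated block above (untested);
// it could be used as the 'formatted' constant in the workaround that follows.
const string simpleFormatted =
    "FMTTYPE=text/html:<html><body>This is some <b>HTML</b> <u>formatted</u> <i>text</i>.</body></html>";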
ical.net
Unfortunately you can't quite do this in ical.net due to a bug with CalendarProperty serialization, but the workaround is just one line.
const string formatted = //the text block from above starting with FMTTYPE=text/html:
var start = DateTime.Now;
var end = start.AddHours(1);
var @event = new Event
{
Start = new CalDateTime(start),
End = new CalDateTime(end),
Description = "This is a description",
};
var property = new CalendarProperty("X-ALT-DESC", formatted);
@event.AddProperty(property);
var calendar = new Calendar();
calendar.Events.Add(@event);
var serialized = new CalendarSerializer().SerializeToString(calendar);
//Workaround for the bug:
serialized = serialized.Replace("X-ALT-DESC:FMTTYPE=text/html", "X-ALT-DESC;FMTTYPE=text/html");
This will produce HTML-formatted Outlook calendar events.
(When I fix the bug, I'll update this answer to remove the string.Replace call.)
I am trying to import Attachments/Annotations into Dynamics CRM using the SDK.
I am not using the data import wizard.
I am not creating Annotation entities individually; instead I am using the Data Import feature programmatically.
I mostly leveraged the DataImport sample from the SDK sample code (SDK\SampleCode\CS\DataManagement\DataImport).
Import import = new Import()
{
ModeCode = new OptionSetValue((int)ImportModeCode.Create),
Name = "Data Import"
};
Guid importId = _serviceProxy.Create(import);
_serviceProxy.Create(
new ColumnMapping()
{
ImportMapId = new EntityReference(ImportMap.EntityLogicalName, importMapId),
ProcessCode = new OptionSetValue((int)ColumnMappingProcessCode.Process),
SourceEntityName = sourceEntityName,
SourceAttributeName = sourceAttributeName,
TargetEntityName = targetEntityName,
TargetAttributeName = targetAttributeName
});
I am getting an error "The reference to the attachment could not be found".
The documentation says the CRM async service will find the physical file on disk and upload it; my question is, where does the async service look for the attachment files?
I tried mapping the documentbody field to the full path of the attachment on disk, but that still didn't work.
The answer below was provided before the question was edited to clarify that the data import feature is being used rather than the SDK directly; it is specific to creating the records via the SDK.
When you are attaching files to an Annotation (Note) record in CRM via the SDK, you do use the documentbody attribute (along with mimetype), but you first have to convert the file contents to Base64.
Something like this:
var myFile = #"C:\Path\To\My\File.pdf";
// Do checks to make sure file exists...
// Convert to Base64.
var base64Data = Convert.ToBase64String(System.IO.File.ReadAllBytes(myFile));
var newNote = new Entity("annotation");
// Set subject, regarding object, etc.
// Add the data required for a file attachment.
newNote.Attributes.Add("documentbody", base64Data);
newNote.Attributes.Add("mimetype", "text/plain"); // This mime type seems to work for all file types.
orgService.Create(newNote);
I found the solution in an obscure blog post. I think the documentation is misleading or unclear; given how this feature actually works, the idea that the files need to be available on the server's disk for the async service to process is odd.
Following the same principle as the CSV data itself, the attachment contents have to be submitted as part of the import and linked to it.
To solve this, you create an individual, special 'Internal' ImportFile for each physical attachment and link it to the import that holds the attachment record details.
As shown below, linking the attachment ImportFile to the import via ImportId and then setting the two properties ProcessCode and FileTypeCode made it all work in the end.
Suffice it to say, this method is much more efficient and quicker than creating Annotation records individually.
foreach (var line in File.ReadLines(csvFilesPath + "Attachment.csv").Skip(1))
{
var fileName = line.Split(',')[0].Replace("\"", null);
using (FileStream stream = File.OpenRead(attachmentsPath + fileName))
{
byte[] byteData = new byte[stream.Length];
stream.Read(byteData, 0, byteData.Length);
stream.Close();
string encodedAttachmentData = System.Convert.ToBase64String(byteData);
ImportFile importFileAttachment = new ImportFile()
{
Content = encodedAttachmentData,
Name = fileName,
ImportMapId = new EntityReference(ImportMap.EntityLogicalName, importMapId),
UseSystemMap = true,
ImportId = new EntityReference(Import.EntityLogicalName, importId),
ProcessCode = new OptionSetValue((int)ImportFileProcessCode.Internal),
FileTypeCode = new OptionSetValue((int)ImportFileFileTypeCode.Attachment),
RecordsOwnerId = currentUserRef
};
_serviceProxy.Create(importFileAttachment);
}
idx++;
}
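For completeness: after the ImportFile records are created, the parse, transform, and import stages still have to be executed, just as in the SDK's DataImport sample this code is based on. A condensed sketch (the request classes live in Microsoft.Crm.Sdk.Messages; importId comes from the code above):
// Condensed from the SDK DataImport sample: run the three stages that process the uploaded ImportFile records.
_serviceProxy.Execute(new ParseImportRequest { ImportId = importId });
_serviceProxy.Execute(new TransformImportRequest { ImportId = importId });
_serviceProxy.Execute(new ImportRecordsImportRequest { ImportId = importId });
// Each stage runs as an asynchronous system job; the SDK sample polls the related
// async operation before moving on to the next stage.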
We are reading an Adobe form template using ABCpdf, populating the form fields with values retrieved from a database, combining them into a single PDF document, and sending the document back as a file stream in the HTTP response to the users of an ASP.NET MVC app.
This approach is working fine and the PDF documents are generated successfully. But when a user opens the generated PDF file and then tries to close it, Adobe Acrobat prompts them with 'Do you want to save changes to xxx.pdf before closing?'. Is there any way of suppressing this message using ABCpdf?
Following is the code we are using to generate the PDF.
public byte[] GeneratePDF(Employee employee, String TemplatePath)
{
string[] FieldNames;
Doc theDoc;
MemoryStream MSgeneratedPDFFile = new MemoryStream();
//Get the PDF Template and read all the form fields inside the template
theDoc = new Doc();
theDoc.Read(HttpContext.Current.Server.MapPath(TemplatePath));
FieldNames = theDoc.Form.GetFieldNames();
//Navigate through each Form field and populate employee details
foreach (string FieldName in FieldNames)
{
Field theField = theDoc.Form[FieldName];
switch (FieldName)
{
case "Your_First_Name":
theField.Value = employee.FirstName;
break;
default:
theField.Value = theField.Name;
break;
}
//Remove Form Fields and replace them with text
theField.Focus();
theDoc.Color.String = "240 240 255";
theDoc.FillRect();
theDoc.Rect.Height = 12;
theDoc.Color.String = "220 0 0";
theDoc.AddText(theField.Value);
theDoc.Delete(theField.ID);
}
return theDoc.GetData();
}
Today I ran into this problem too, but with a PDF with no form fields. I ran @CharlieNoTomatoes' code and confirmed the FieldNames collection was definitely empty.
I stepped through the various stages of my code and found that if I saved the PDF to the file system and opened it from there, it was fine. That narrowed it down to the code that took the ABCpdf data stream and sent it directly to the user (I normally don't bother actually saving to disk). I found a note in the WebSuperGoo docs suggesting my server might be sending some extra rubbish in the Response, causing the file to be corrupted.
Adding Response.End(); did the trick for me. The resulting PDF files no longer displayed the message.
byte[] theData = _thisPdf.Doc.GetData();
var curr = HttpContext.Current;
curr.Response.Clear();
curr.Response.ContentType = "application/pdf";
curr.Response.AddHeader("Content-Disposition", "attachment; filename=blah.pdf");
curr.Response.Charset = "UTF-8";
curr.Response.AddHeader("content-length", theData.Length.ToString());
curr.Response.BinaryWrite(theData);
curr.Response.End();
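Since the question describes an ASP.NET MVC app, an alternative sketch (hypothetical action name, reusing _thisPdf from above) is to return the bytes as a FileResult and let MVC manage the response instead of writing to HttpContext.Current.Response by hand:
// Hypothetical MVC controller action: File() sets the Content-Type and
// Content-Disposition headers and completes the response, so no stray output
// is appended after the PDF bytes.
public ActionResult DownloadPdf()
{
    byte[] theData = _thisPdf.Doc.GetData();
    return File(theData, "application/pdf", "blah.pdf");
}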
I ran into this problem, as well. I found a hint about "appearance streams" here:
The PDF contains form fields and the NeedAppearances entry in the interactive form dictionary is set to true. This means that the conforming PDF reader will generate an appearance stream where necessary for form fields in the PDF and as a result the Save button is enabled. If the NeedAppearances entry is set to false then the conforming PDF reader should not generate any new appearance streams. More information about appearance streams in PDF files and how to control them with Debenu Quick PDF Library.
So, I looked for "appearance" things in the websupergoo doc and was able to set some form properties and call a field method to get rid of the "Save changes" message. In the code sample above, it would look like this:
Edit: After exchanging emails with fast-and-helpful WebSuperGoo support about the same message after creating a PDF with AddImageHtml that WASN'T fixed by just setting the form NeedAppearances flag, I added the lines about Catalog and Atom to remove the core document NeedAppearances flag that gets set during AddImageHtml.
public byte[] GeneratePDF(Employee employee, String TemplatePath)
{
string[] FieldNames;
Doc theDoc;
MemoryStream MSgeneratedPDFFile = new MemoryStream();
//Get the PDF Template and read all the form fields inside the template
theDoc = new Doc();
theDoc.Read(HttpContext.Current.Server.MapPath(TemplatePath));
FieldNames = theDoc.Form.GetFieldNames();
//Tell PDF viewer to not create its own appearances
theDoc.Form.NeedAppearances = false;
//Generate appearances when needed
theDoc.Form.GenerateAppearances = true;
//Navigate through each Form field and populate employee details
foreach (string FieldName in FieldNames)
{
Field theField = theDoc.Form[FieldName];
switch (FieldName)
{
case "Your_First_Name":
theField.Value = employee.FirstName;
break;
default:
theField.Value = theField.Name;
break;
}
//Update the appearance for the field
theField.UpdateAppearance();
//Remove Form Fields and replace them with text
theField.Focus();
theDoc.Color.String = "240 240 255";
theDoc.FillRect();
theDoc.Rect.Height = 12;
theDoc.Color.String = "220 0 0";
theDoc.AddText(theField.Value);
theDoc.Delete(theField.ID);
}
Catalog cat = theDoc.ObjectSoup.Catalog;
Atom.RemoveItem(cat.Resolve(Atom.GetItem(cat.Atom, "AcroForm")), "NeedAppearances");
return theDoc.GetData();
}
I found nsINavBookmarksService; however, since Firefox 3 it does not seem to have any API methods for getting/setting the bookmark description. (API doc)
I've seen other add-ons modify the description as a form of synchronized data storage (which is exactly what I'm trying to do). I'm guessing the description is perhaps not a Gecko standard, which is why it is not directly supported, but then there must be a completely different interface for manipulating bookmarks that I haven't discovered.
Can anyone help with this newbie problem?
Starting with Firefox 3, bookmarks have been merged into the Places database, which contains all of your browsing history. So to get the bookmarks you run a history query, like this (the first line is specific to the Add-on SDK, which you appear to be using):
var {Cc, Ci} = require("chrome");
var historyService = Cc["#mozilla.org/browser/nav-history-service;1"]
.getService(Ci.nsINavHistoryService);
var options = historyService.getNewQueryOptions();
var query = historyService.getNewQuery();
query.onlyBookmarked = true;
var result = historyService.executeQuery(query, options);
result.root.containerOpen = true;
for (var i = 0; i < result.root.childCount; i++)
{
var node = result.root.getChild(i);
console.log(node.title);
}
That's mostly identical to the code example here. Your problem is of course that nsINavHistoryResultNode has no way of storing data like a description. You can set annotations, however; see Using the Places annotation service. So if you already have a node variable for your bookmark:
var annotationName = "my.extension.example.com/bookmarkDescription";
var ioService = Cc["#mozilla.org/network/io-service;1"]
.getService(Ci.nsIIOService);
var uri = ioService.newURI(node.uri, null, null);
var annotationService = Cc["#mozilla.org/browser/annotation-service;1"]
.getService(Ci.nsIAnnotationService);
annotationService.setPageAnnotation(uri, annotationName,
"Some description", 0, Ci.nsIAnnotationService.EXPIRE_NEVER);
For reference: nsIAnnotationService.setPageAnnotation()