What is wrong with this use of goto? - FireMonkey

I'm using a goto statement to skip a piece of code (as per the documentation), just for testing purposes while I debug a block of code. I'm getting error 1526, "goto into protected scope". I know this is totally trivial, but I want to know what is wrong with how I'm using goto here:
#if defined(_PLAT_ANDROID)
_di_JIntent MyIntent;
MyIntent = TJIntent::JavaClass->init(TJIntent::JavaClass->ACTION_VIEW,
TJnet_Uri::JavaClass->parse(StringToJString("http://relayman.org/papers/2009_FDA_paper.pdf")));
TAndroidHelper::Activity->startActivity(MyIntent);
goto Skipit;
Androidapi::Jni::Graphicscontentviewtext::_di_JIntent intent = TJIntent::Create();
intent->setDataAndType(StringToJString("file://" + System::Ioutils::TPath::Combine(System::Ioutils::TPath::GetSharedDownloadsPath(), "sample.pdf")), StringToJString(L"application/pdf"));
intent->setAction(TJIntent::JavaClass->ACTION_VIEW);
intent->setFlags(TJIntent::JavaClass->FLAG_GRANT_READ_URI_PERMISSION);
if (SharedActivity()->getPackageManager()->queryIntentActivities(intent, TJPackageManager::JavaClass->MATCH_DEFAULT_ONLY)->size() > 0) {
    SharedActivity()->startActivity(intent);
} else {
    ShowMessage("PDF viewer not found");
}
Skipit:
#endif
I'm working in 10.3.2 and building for an Android target.

Related

Problem Generating Html Report Using DbUp during Octopus Deployment

Using Octopus Deploy to deploy a simple API.
The first step of our deployment process is to generate an HTML report with the delta of the scripts run vs the scripts required to run. I used this tutorial to create the step.
The relevant code in my console application is:
var reportLocationSection = appConfiguration.GetSection(previewReportCmdLineFlag);
if (reportLocationSection.Value is not null)
{
    // Generate a preview file so Octopus Deploy can generate an artifact for approvals
    try
    {
        var report = reportLocationSection.Value;
        var fullReportPath = Path.Combine(report, deltaReportName);
        Console.WriteLine($"Generating upgrade report at {fullReportPath}");
        upgrader.GenerateUpgradeHtmlReport(fullReportPath);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        return operationError;
    }
}
The PowerShell I am using in the script step is:
# Get the extracted path for the package
$packagePath = $OctopusParameters["Octopus.Action.Package[DatabaseUpdater].ExtractedPath"]
$connectionString = $OctopusParameters["Project.Database.ConnectionString"]
$reportPath = $OctopusParameters["Project.HtmlReport.Location"]
Write-Host "Report Path: $($reportPath)"
$exeToRun = "$($packagePath)\DatabaseUpdater.exe"
$generatedReport = "$($reportPath)\UpgradeReport.html"
Write-Host "Generated Report: $($generatedReport)"
if ((Test-Path $reportPath) -eq $false){
    Write-Host "Creating new directory..."
    New-Item -ItemType Directory -Path $reportPath | Out-Null
} else {
    Write-Host "Directory already exists."
}
# Run this .NET app, passing in the Connection String and a flag
# which tells the app to create a report, but not update the database
& $exeToRun --connectionString="$($connectionString)" --previewReportPath="$($reportPath)"
New-OctopusArtifact -Path "$($generatedReport)"
The error reported by Octopus is:
'Could not find file 'C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html'.'
I'm guessing that is being thrown when this PowerShell line is hit: New-OctopusArtifact ...
And that seems to indicate that the report was never created.
I've used a bit of logging to log out certain variables and the values look sound:
Report Path: C:\DeltaReports\Some API\2.9.15-DbUp-Test-9
Generated Report: C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html
Generating upgrade report at C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html
As you can see in the C#, the relevant code is wrapped in a try/catch block, but I'm not sure whether the error is being written out there or at a later point by Octopus (I'd need to do a pull request to add a marker in the code).
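For illustration, such a marker could be as small as a distinctive prefix on the log line in the catch block; the sketch below is hypothetical, not code that is in the app today:
catch (Exception ex)
{
    // Hypothetical marker: the prefix makes it obvious whether this message
    // came from our catch block or was logged later by Octopus/DbUp.
    Console.WriteLine($"[DatabaseUpdater catch] {ex}");
    return operationError;
}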
Can anyone see a way forward with resolving this? Has anyone else encountered this?
Cheers
I recently redid some of the work from that article for this video up on YouTube. I did run into some issues with the .SQL files not being included in the assembly. I think it was after I upgraded to .NET 6. But that might be a coincidence.
Anyway, because the files weren't being included in the assembly, when I ran the command line app via Octopus, it wouldn't properly generate the file for me. I ended up configuring the project to copy the .SQL files to a folder in the output directory instead of embedding them in the assembly. You can view a sample package here.
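For reference, a minimal sketch of the kind of project-file change I mean, assuming an SDK-style csproj and script folders named DeploymentScripts and PostDeploymentScripts (those folder names come from the code further down; your layout may differ):
<!-- Copy the .SQL files to the output directory instead of embedding them in the assembly -->
<ItemGroup>
  <None Update="DeploymentScripts\**\*.sql">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
  <None Update="PostDeploymentScripts\**\*.sql">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>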
One thing that helped me is running the app in a debugger with the same parameters just to make sure it was actually generating the file. I'm sure you already thought of that, but I'd be remiss if I forgot to include it in my answer. :)
FWIW, these are my updated scripts.
First, the Octopus Script:
$packagePath = $OctopusParameters["Octopus.Action.Package[Trident.Database].ExtractedPath"]
$connectionString = $OctopusParameters["Project.Connection.String"]
$environmentName = $OctopusParameters["Octopus.Environment.Name"]
$reportPath = $OctopusParameters["Project.Database.Report.Path"]
cd $packagePath
$appToRun = ".\Octopus.Trident.Database.DbUp"
$generatedReport = "$reportPath\UpgradeReport.html"
& $appToRun --ConnectionString="$connectionString" --PreviewReportPath="$reportPath"
New-OctopusArtifact -Path "$generatedReport" -Name "$environmentName.UpgradeReport.html"
My C# code can be found here but for ease of use, you can see it all here (I'm not proud of how I parse the parameters).
static void Main(string[] args)
{
    var connectionString = args.FirstOrDefault(x => x.StartsWith("--ConnectionString", StringComparison.OrdinalIgnoreCase));
    connectionString = connectionString.Substring(connectionString.IndexOf("=") + 1).Replace(@"""", string.Empty);

    var executingPath = Assembly.GetExecutingAssembly().Location.Replace("Octopus.Trident.Database.DbUp", "").Replace(".dll", "").Replace(".exe", "");
    Console.WriteLine($"The execution location is {executingPath}");

    var deploymentScriptPath = Path.Combine(executingPath, "DeploymentScripts");
    Console.WriteLine($"The deployment script path is located at {deploymentScriptPath}");

    var postDeploymentScriptsPath = Path.Combine(executingPath, "PostDeploymentScripts");
    Console.WriteLine($"The post deployment script path is located at {postDeploymentScriptsPath}");

    var upgradeEngineBuilder = DeployChanges.To
        .SqlDatabase(connectionString, null)
        .WithScriptsFromFileSystem(deploymentScriptPath, new SqlScriptOptions { ScriptType = ScriptType.RunOnce, RunGroupOrder = 1 })
        .WithScriptsFromFileSystem(postDeploymentScriptsPath, new SqlScriptOptions { ScriptType = ScriptType.RunAlways, RunGroupOrder = 2 })
        .WithTransactionPerScript()
        .LogToConsole();

    var upgrader = upgradeEngineBuilder.Build();
    Console.WriteLine("Is upgrade required: " + upgrader.IsUpgradeRequired());

    if (args.Any(a => a.StartsWith("--PreviewReportPath", StringComparison.InvariantCultureIgnoreCase)))
    {
        // Generate a preview file so Octopus Deploy can generate an artifact for approvals
        var report = args.FirstOrDefault(x => x.StartsWith("--PreviewReportPath", StringComparison.OrdinalIgnoreCase));
        report = report.Substring(report.IndexOf("=") + 1).Replace(@"""", string.Empty);

        if (Directory.Exists(report) == false)
        {
            Directory.CreateDirectory(report);
        }

        var fullReportPath = Path.Combine(report, "UpgradeReport.html");
        if (File.Exists(fullReportPath) == true)
        {
            File.Delete(fullReportPath);
        }

        Console.WriteLine($"Generating the report at {fullReportPath}");
        upgrader.GenerateUpgradeHtmlReport(fullReportPath);
    }
    else
    {
        var result = upgrader.PerformUpgrade();

        // Display the result
        if (result.Successful)
        {
            Console.ForegroundColor = ConsoleColor.Green;
            Console.WriteLine("Success!");
        }
        else
        {
            Console.ForegroundColor = ConsoleColor.Red;
            Console.WriteLine(result.Error);
            Console.WriteLine("Failed!");
        }
    }
}
I hope that helps!
After long and detailed investigation, we discovered the answer was quite obvious.
We assumed the existing deploy process configuration was sound, because we had never had a problem with it (until now). As it transpired, there was a problem which led to the Development deployments being run twice.
Hence errors like the one above, and others which talked about file handles being held by another process.
It was actually obvious in hindsight, but we were blind to it as we thought the existing process was sound 😣

SonarQube OpenEdge custom rule to verify &IF preprocessor with Proparse

I am trying to create a custom plugin, based on the Riverside OpenEdge plugin and its version of Proparse, with a rule that validates an &IF preprocessor directive.
This rule needs to verify whether the application is using a deprecated value of a &GLOBAL-DEFINE, like this:
/* "A" is the deprecated value of "opts" so I want to create a new ISSUE here */
&IF "{&opts}" = "A" &THEN
MESSAGE "DEPRECATED CODE".
&ENDIF
&IF "{&opts}" > "A" &THEN
MESSAGE "OK CODE".
&ENDIF
For this rule I tried to do something like this:
if (unit.getMacroGraph().macroEventList.stream().noneMatch(macro -> macro instanceof NamedMacroRef
        && ((NamedMacroRef) macro).getMacroDef().getName().equalsIgnoreCase("opts"))) {
    return;
}

TokenSource stream = unit.lex();
ProToken tok = (ProToken) stream.nextToken();
while (tok.getNodeType() != ABLNodeType.EOF_ANTLR4) {
    if (tok.getNodeType() == ABLNodeType.AMPIF) {
        // Verify node.
        System.out.println(tok);
    }
    tok = (ProToken) stream.nextToken();
}
But I don't know if it's the best way to verify this (I based it on code from other sources), and it's not working because the next node comes back as an empty "QSSTRING". I am very new to the Proparse world; any help is appreciated.
First, you have to know that Proparse doesn't give access to every detail of the preprocessor. That said, the method unit.getMacroGraph() will give you access to the visible part of the preprocessor, so that's a good starting point.
If you're looking for usages of a given preprocessor variable, you can search for NamedMacroRef instances pointing to the right MacroDef object (with NamedMacroRef#getMacroDef()#getName()), and the right value.
In an old-style for-each loop:
for (MacroRef ref : unit.getMacroSourceArray()) {
    if (ref instanceof NamedMacroRef) {
        if ("opts".equalsIgnoreCase(((NamedMacroRef) ref).getMacroDef().getName())
                && "A".equalsIgnoreCase(((NamedMacroRef) ref).getMacroDef().getValue())) {
            System.out.println("OPTS variable usage with value 'A' at file " + ref.getFileIndex() + ":" + ref.getLine());
        }
    }
}
On this file:
&global-define opts a
&IF "{&opts}" = "A" &THEN
MESSAGE "DEPRECATED CODE".
&ENDIF
&undefine opts
&global-define opts b
&IF "{&opts}" > "A" &THEN
MESSAGE "OK CODE".
&ENDIF
This gives:
OPTS variable usage with value 'A' at file 0:2
So you don't have access to the expression engine, but I think that the current API is enough for what you want to do.
You can then report the issue to SonarQube with OpenEdgeProparseCheck#reportIssue().

How to define compile time variable on Shared Code Project code

I'm currently working on adding some code to a project (FakeXrmEasy) that uses a Shared Code Project as its main code repository.
I want to make changes to the following code paths:
#if FAKE_XRM_EASY_2016 || FAKE_XRM_EASY_365 || FAKE_XRM_EASY_9
// Connect to the CRM web service using a connection string.
CrmServiceClient client = new Microsoft.Xrm.Tooling.Connector.CrmServiceClient(connectionString);
return client;
#else
CrmConnection crmConnection = CrmConnection.Parse(connectionString);
OrganizationService service = new OrganizationService(crmConnection);
return service;
#endif
Currently, the code between #if FAKE_XRM_EASY_2016 || FAKE_XRM_EASY_365 || FAKE_XRM_EASY_9 and #else is greyed out, and IntelliSense/debugging won't work on it.
Is there a way to get IntelliSense on the greyed-out code, or to define the compile-time variable in a shared code project?
Is there a way to get IntelliSense on the greyed-out code, or to define the compile-time variable in a shared code project?
As far as I know, it is designed that way and there is no option to change it.
In such a project, the current #if condition evaluates to false, so the code is in an inactive region; that part of the project will not be compiled and therefore cannot get IntelliSense.
As a suggestion, you can temporarily assign a true condition to the #if. In your situation:
1) First use this:
#if true
////you can obtain the intellisense for this
#else
.....
#endif
2) Then change to this:
#if false
.........
#else
//////add your code here with the related Intellisense
#endif
3) Then change back to this:
#if FAKE_XRM_EASY_2016 || FAKE_XRM_EASY_365 || FAKE_XRM_EASY_9
.........
#else
........
#endif

Enabling Closed-Display Mode w/o Meeting Apple's Requirements

EDIT:
I have heavily edited this question after making some significant new discoveries and the question not having any answers yet.
Historically/AFAIK, keeping your Mac awake while in closed-display mode without meeting Apple's requirements has only been possible with a kernel extension (kext) or a command run as root. Recently, however, I have discovered that there must be another way. I could really use some help figuring out how to get this working for use in a (100% free, no IAP) sandboxed Mac App Store (MAS) compatible app.
I have confirmed that some other MAS apps are able to do this, and it looks like they might be writing YES to a key named clamshellSleepDisabled. Or perhaps there's some other trickery involved that causes the key value to be set to YES? I found the function in IOPMrootDomain.cpp:
void IOPMrootDomain::setDisableClamShellSleep( bool val )
{
    if (gIOPMWorkLoop->inGate() == false) {
        gIOPMWorkLoop->runAction(
            OSMemberFunctionCast(IOWorkLoop::Action, this, &IOPMrootDomain::setDisableClamShellSleep),
            (OSObject *)this,
            (void *)val);
        return;
    }
    else {
        DLOG("setDisableClamShellSleep(%x)\n", (uint32_t) val);
        if ( clamshellSleepDisabled != val )
        {
            clamshellSleepDisabled = val;
            // If clamshellSleepDisabled is reset to 0, reevaluate if
            // system need to go to sleep due to clamshell state
            if ( !clamshellSleepDisabled && clamshellClosed)
                handlePowerNotification(kLocalEvalClamshellCommand);
        }
    }
}
I'd like to give this a try and see if that's all it takes, but I don't really have any idea about how to go about calling this function. It's certainly not a part of the IOPMrootDomain documentation, and I can't seem to find any helpful example code for functions that are in the IOPMrootDomain documentation, such as setAggressiveness or setPMAssertionLevel. Here's some evidence of what's going on behind the scenes according to Console:
I've had a tiny bit of experience working with IOPMrootDomain via adapting some of ControlPlane's source for another project, but I'm at a loss for how to get started on this. Any help would be greatly appreciated. Thank you!
EDIT:
With @pmdj's contribution/answer, this has been solved!
Full example project:
https://github.com/x74353/CDMManager
This ended up being surprisingly simple/straightforward:
1. Import header:
#import <IOKit/pwr_mgt/IOPMLib.h>
2. Add this function in your implementation file:
IOReturn RootDomain_SetDisableClamShellSleep (io_connect_t root_domain_connection, bool disable)
{
    uint32_t num_outputs = 0;
    uint32_t input_count = 1;
    uint64_t input[input_count];
    input[0] = (uint64_t) { disable ? 1 : 0 };

    return IOConnectCallScalarMethod(root_domain_connection, kPMSetClamshellSleepState, input, input_count, NULL, &num_outputs);
}
3. Use the following to call the above function from somewhere else in your implementation:
io_connect_t connection = IO_OBJECT_NULL;
io_service_t pmRootDomain = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("IOPMrootDomain"));
IOServiceOpen (pmRootDomain, current_task(), 0, &connection);
// 'enable' is a bool you should assign a YES or NO value to prior to making this call
RootDomain_SetDisableClamShellSleep(connection, enable);
IOServiceClose(connection);
I have no personal experience with the PM root domain, but I do have extensive experience with IOKit, so here goes:
You want IOPMrootDomain::setDisableClamShellSleep() to be called.
A code search for sites calling setDisableClamShellSleep() quickly reveals a location in RootDomainUserClient::externalMethod(), in the file iokit/Kernel/RootDomainUserClient.cpp. This is certainly promising, as externalMethod() is what gets called in response to user space programs calling the IOConnectCall*() family of functions.
Let's dig in:
IOReturn RootDomainUserClient::externalMethod(
    uint32_t selector,
    IOExternalMethodArguments * arguments,
    IOExternalMethodDispatch * dispatch __unused,
    OSObject * target __unused,
    void * reference __unused )
{
    IOReturn ret = kIOReturnBadArgument;

    switch (selector)
    {
        …
        …
        …
        case kPMSetClamshellSleepState:
            fOwner->setDisableClamShellSleep(arguments->scalarInput[0] ? true : false);
            ret = kIOReturnSuccess;
            break;
        …
So, to invoke setDisableClamShellSleep() you'll need to:
Open a user client connection to IOPMrootDomain. This looks straightforward, because:
Upon inspection, IOPMrootDomain has an IOUserClientClass property of RootDomainUserClient, so IOServiceOpen() from user space will by default create a RootDomainUserClient instance.
IOPMrootDomain does not override the newUserClient member function, so there are no access controls there.
RootDomainUserClient::initWithTask() does not appear to place any restrictions (e.g. root user, code signing) on the connecting user space process.
So it should simply be a case of running this code in your program:
// root_domain_service is the io_service_t for IOPMrootDomain,
// e.g. obtained via IOServiceGetMatchingService() as shown above.
io_connect_t connection = IO_OBJECT_NULL;
IOReturn ret = IOServiceOpen(
    root_domain_service,
    current_task(),
    0, // user client type, ignored
    &connection);
Call the appropriate external method.
From the code excerpt earlier on, we know that the selector must be kPMSetClamshellSleepState.
arguments->scalarInput[0] being zero will call setDisableClamShellSleep(false), while a nonzero value will call setDisableClamShellSleep(true).
This amounts to:
IOReturn RootDomain_SetDisableClamShellSleep(io_connect_t root_domain_connection, bool disable)
{
    uint32_t num_outputs = 0;
    uint64_t inputs[] = { disable ? 1 : 0 };
    return IOConnectCallScalarMethod(
        root_domain_connection, kPMSetClamshellSleepState,
        inputs, 1, // 1 = length of array 'inputs'
        NULL, &num_outputs);
}
When you're done with your io_connect_t handle, don't forget to IOServiceClose() it.
This should let you toggle clamshell sleep on or off. Note that there does not appear to be any provision for automatically resetting the value to its original state, so if your program crashes or exits without cleaning up after itself, whatever state was last set will remain. This might not be great from a user experience perspective, so perhaps try to defend against it somehow, for example in a crash handler.

C# Single-Assembly Multilanguage App - Resources not being loaded properly

I'm developing a simple C# .NET 4.0 application and want to have it localized in several languages. However, the satellite assemblies created for localization (e.g. de/MyApp.resources.dll) would trash its simplicity by forcing me to drag around those DLLs and their folders.
That's why I looked for a means to include those DLLs in the main (and only) assembly, so that only the executable needs to be sent to the end user. I came across this very promising question and gave it a shot.
After adopting the class in the suggested solution, I replaced all occurrences I could find of ResourceManager with SingleAssemblyResourceManager in the .Designer.cs files, using FART in a pre-build command:
fart.exe "$(ProjectDir)*.Designer.cs" "System.ComponentModel.ComponentResourceManager" "SingleAssemblyComponentResourceManager"
Then I created a batch file like so:
"%ProgramFiles%\ILRepack.exe" /t:exe /out:%1TempProg.exe %1%2.exe %1es\%2.resources.dll
IF %ERRORLEVEL% NEQ 0 GOTO END
"%ProgramFiles%\ILRepack.exe" /t:exe /out:%1TempProg2.exe %1TempProg.exe %1de\%2.resources.dll
IF %ERRORLEVEL% NEQ 0 GOTO END
"%ProgramFiles%\ILRepack.exe" /t:exe /out:%1SA_%2.exe %1TempProg2.exe %1tr\%2.resources.dll
IF %ERRORLEVEL% NEQ 0 GOTO END
del %1%2.exe
del %1%2.pdb
del %1TempProg.exe
del %1TempProg.pdb
del %1TempProg2.exe
del %1TempProg2.pdb
rmdir %1es /S /Q
rmdir %1de /S /Q
rmdir %1tr /S /Q
:END
And called it from a post-build command:
$(ProjectDir)postbuild.bat $(TargetDir) $(TargetName)
Note: TargetName and ProjectName are the same in this case.
It built successfully, but it's not working as expected... The form should be displayed in the InstalledUICulture language (if available). To accomplish this, I added this line before InitializeComponent():
Thread.CurrentThread.CurrentUICulture = CultureInfo.InstalledUICulture;
This did the trick in the "standard" version of the program. Not anymore. However, I also added a little control to change the language at runtime, via a ComboBox. The code is as follows:
private void comboBox1_SelectedIndexChanged(object sender, EventArgs e)
{
    if (comboBox1.SelectedItem.ToString() == "English (Default)")
    {
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("en");
        ChangeLanguage("en");
    }
    else if (comboBox1.SelectedItem.ToString() == "Español")
    {
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("es");
        ChangeLanguage("es");
    }
    else if (comboBox1.SelectedItem.ToString() == "Deutsch")
    {
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("de");
        ChangeLanguage("de");
    }
    else if (comboBox1.SelectedItem.ToString() == "Turkce")
    {
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("tr");
        ChangeLanguage("tr");
    }
}

private void ChangeLanguage(string lang)
{
    foreach (Control c in this.Controls)
    {
        SingleAssemblyComponentResourceManager resources = new SingleAssemblyComponentResourceManager(typeof(Form1));
        resources.ApplyResources(c, c.Name, new CultureInfo(lang));
        if (c.ToString().StartsWith("System.Windows.Forms.GroupBox"))
        {
            foreach (Control child in c.Controls)
            {
                SingleAssemblyComponentResourceManager resources_child = new SingleAssemblyComponentResourceManager(typeof(Form1));
                resources_child.ApplyResources(child, child.Name, new CultureInfo(lang));
            }
        }
    }
}
And this does change the form language, so the DLLs are actually included in the EXE. Why, then, does InitializeComponent not load the appropriate resources? I checked the Designer code, and the ResourceManager had been replaced by SingleAssemblyResourceManager.
Also, other than the form buttons' texts, I have a strings.resx file per language, for MessageBoxes and whatnot, and that doesn't seem to work either way. But that might be another question.
I am aware that the original solution was designed for a .NET 2.0 environment, and that the ResourceSets are obsolete, but it is my understanding that it should work, even if it's not recommended.
Any pointers as to where I should look would be awesome.
As it turns out, I was eventually able to make it work by slightly modifying the CurrentUICulture line. It would seem the code does not (properly) try parent cultures, because if I set it to the neutral culture (that is, "de" instead of "de-DE") it works perfectly.
Thread.CurrentThread.CurrentUICulture = new CultureInfo(CultureInfo.InstalledUICulture.TwoLetterISOLanguageName);
I discovered this since it was the only evident difference between the ApplyResources calls from InitializeComponent() and ChangeLanguage().
Now, I do not know why this is, and there certainly may be a better solution out there, but it's the only fix I have found so far.
The strings part still doesn't work though :/
