Note that this is on Mac OS X, although I imagine the same problem would exist on any DTrace-capable OS.
I have an app that uses a lot of plugins. I'm adding userland probes to it, both in the core application and in the plugins themselves. The issue is that if a plugin uses the same provider name as the main app, the plugin's probes don't show up when I attempt to list the available probes. It appears that whichever code loads first wins.
my .d file in my main app:
provider MyApp {
    probe doSomething();
};
and in my plugin:
provider MyApp {
    probe plugin_doSomethingPluginish();
};
Changing the name of the provider to something else, like MyAppPlugin, works, but then the list of providers is going to get insane (MyAppPlugin1, MyAppPlugin2, etc.). I'd like to think that there's a way to add new plugin-defined probes under the same provider name as the main app, but I'm either not seeing it or it doesn't exist.
So is there a way to do this? And if not, is it normal to have a different provider for each plugin even though the module name is already unique? Seems like that's what the module name is for...
You should define just one provider .d file and import the generated .h file into each class that uses any of those probes; there is really no reason to have multiple .d files each listing the same provider. I just checked the DTrace documentation and don't see anything about this right off the bat, but I would presume that multiple .d files each defining the same provider creates some sort of conflict, or that loading a probe listing for the same provider redefines the earlier listing rather than extending it as you probably intended.
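As a sketch, a single shared provider definition would simply declare both sets of probes in one .d file (using the probe names from your snippets):

provider MyApp {
    probe doSomething();
    probe plugin_doSomethingPluginish();
};

The header generated from it (via dtrace -h) can then be included from both the core app and the plugin sources.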
I have some code that uses the Python Azure SDK to deploy a virtual machine within a resource group. I manually provision each resource in order (a vnet and subnet if necessary, a public IP address, a NIC, and finally the VM itself).
Now, when I want to delete the VM, I can query the list of resources within the resource group and filter that list in my code to match only those resources which have a tag with the matching value.
The problem is that you can't just arbitrarily delete resources that have dependencies. For example, I cannot delete the NIC because it is in use by the virtual machine; I can't delete the OS disk because it's also in use by the VM; I can't delete the public IP address because it's assigned to the NIC; etc.
In the Azure portal you can check off a list of resources and ask the portal to delete all of them, and it handles any resource inter-dependencies for you, but it looks like this is not possible from the SDK.
Right now my only solution is to encode the path of resource creation and dependency in my code itself. I have to work backwards: first, search the list for VMs with the right tag and delete them, then search for disks with the tag and delete them, then NICs, and so on down the line. But this leaves a lot of room for error and is not in any way reusable for other types of resources.
The only other alternative I can think of is "try to delete it and handle errors", but there are a lot of ugly edge cases I could see happening there, and I'd rather take a less haphazard approach, especially since we're deleting things.
TL;DR: Is there a proper way to take a list of resources and query Azure to determine which other resources depend on them? (This could be done one resource at a time, but it would still be best for it to be generic, i.e. able to handle any resource without necessarily knowing that resource's type up front.)
The resource group contains other resources as well which are related to the same project (e.g. other VMs, a storage account, etc.) so deleting an entire resource group is NOT an option.
One workaround you can try is to use the Azure CLI from a PowerShell session together with tags. Add a tag to the resources you want to delete, then use the commands below to delete them in bulk.
$resources = az resource list --tag Key=Value | ConvertFrom-Json
foreach ($resource in $resources) {
    # --ids already identifies the resource group, so no extra arguments are needed
    az resource delete --ids $resource.id --verbose
}
This will delete the resources regardless of the location or the resource group in which they were created.
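If you want to stay in the Python SDK mentioned in the question, the same tag-driven cleanup can be sketched with azure-mgmt-resource. Since ARM refuses to delete a resource that still has dependents, a retry loop effectively works backwards through the dependency chain without hard-coding it. A minimal sketch, assuming a track-2 SDK (azure-mgmt-resource 15+); the subscription ID, tag key/value, and api_version are placeholders, and begin_delete_by_id needs an api_version valid for each resource's type:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Everything carrying the tag; each pass deletes the resources whose
# dependents were removed in the pass before it.
pending = list(client.resources.list(filter="tagName eq 'Key' and tagValue eq 'Value'"))
while pending:
    failed = []
    for res in pending:
        try:
            client.resources.begin_delete_by_id(res.id, api_version="2021-04-01").result()
        except Exception:
            failed.append(res)  # likely still in use by a dependent; retry next pass
    if len(failed) == len(pending):
        raise RuntimeError("No progress deleting: %s" % [r.id for r in failed])
    pending = failed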
I want to create an instance of org.gradle.api.file.Directory. From the Gradle docs I see that the only way to do this is to create the first instance using project.getLayout().getProjectDirectory() and then use the instance method dir(<path>) on this instance to create an instance for another directory.
Is there a way to directly create an instance of Directory class (like using a File object or directly using a string path)?
When I started working with the new Gradle lazy configuration API, I encountered the same problem. Even though dir(<path>) accepts absolute paths, so you can construct a Directory instance for any directory, it looked like bad design to me.
However, it is actually a pretty consistent design: with the old API we just called Project.file(...) on our paths, which also evaluated relative paths against the project directory. Constructing File instances directly via the constructor was always a bad idea in Gradle. We can now simply replace the calls to file(<path>) with calls to layout.projectDirectory.dir(<path>) or .file(<path>) and get the same behavior.
Starting with version 6.0, Gradle also provides a method dir(Provider<File>) on its ProjectLayout class. You may construct the Provider<File> using the method provider(Callable<>) of the Project class, but whether this is actually useful depends on your specific use case.
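To make that concrete, a short Groovy DSL sketch of the options above (the paths are arbitrary examples):

// Relative paths resolve against the project directory, like Project.file(...)
def confDir = layout.projectDirectory.dir('config')
// Absolute paths work as well
def absDir = layout.projectDirectory.dir('/opt/myapp/data')
// Gradle 6.0+: turn a Provider<File> into a Provider<Directory>
def lazyDir = layout.dir(provider { new File('config') })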
I want to place two files, a.idl and b.idl, in the folder at https://github.com/RedhawkSDR/framework-core/tree/master/src/idl/ossie/CF
I have also included a.idl and b.idl in the makefile at
https://github.com/RedhawkSDR/framework-core/tree/master/src/idl
as is done for all the other IDL files mentioned above.
But they are not being compiled: I am not able to find the generated files anywhere.
Any pointers would be appreciated.
In addition to including a.idl and b.idl in the "Makefile.am" at https://github.com/RedhawkSDR/framework-core/tree/master/src/idl , we also have to do the following in the "Makefile.am" at https://github.com/RedhawkSDR/framework-core/tree/master/src/base/framework/idl :
Add aSK.cpp, aDynSK.cpp, bSK.cpp, and bDynSK.cpp to the "BUILT_SOURCES" variable defined in that file.
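A sketch of that change, with the variable's existing contents elided (the four file names follow the IDL compiler's skeleton/stub naming for a.idl and b.idl):

BUILT_SOURCES += aSK.cpp aDynSK.cpp \
                 bSK.cpp bDynSK.cpp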
With this done, we can now see the skeleton and stub code in the folders at the following links:
https://github.com/RedhawkSDR/framework-core/tree/master/src/base/framework/idl
and
the folder RedhawkSDR/framework-core/tree/master/src/base/include/ossie/CF/, which is generated on running the install command.
REDHAWK's IDL is split into two main categories: core services and ports. Core services are related to REDHAWK's core functionality, like deploying an application. Ports are application-specific interfaces for communicating between different processing stages (components or devices). Core services are not meant to be extended, while ports are meant to be extended by the user beyond those already provided (see https://redhawksdr.github.io/2.2.4/manual/connections/).
New IDL can be added to a REDHAWK instance by creating custom IDL interfaces (https://redhawksdr.github.io/2.2.4/manual/connections/custom-idl-interfaces/).
Is it possible to register external methods for Zope using a configure.zcml file or something similar? I'm trying to register external Python scripts (similar to other registry items such as "jsregistry.xml" or "cssregistry.xml" in themes)
No. External Methods are "old tech", pre-dating the Zope Component Architecture by several years.
You can easily add a GenericSetup import step that creates ExternalMethod objects on demand, but since only Python modules located in the Extensions directories (inside Products packages and the INSTANCE_HOME location) can be used, you may as well just enumerate those locations with the usual Python file-access methods, add everything you find there, and not use a registry at all.
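For what it's worth, a minimal sketch of such an import step; the function name, the single scanned location, and the module-equals-function naming convention are illustrative assumptions, and the step still needs to be registered via an import_steps.xml in your GenericSetup profile:

import os
from App.config import getConfiguration
from Products.ExternalMethod.ExternalMethod import ExternalMethod

def create_external_methods(context):
    # GenericSetup import step: create an ExternalMethod for each Python
    # module found in the instance's Extensions directory.
    portal = context.getSite()
    extensions = os.path.join(getConfiguration().instancehome, 'Extensions')
    for filename in os.listdir(extensions):
        module, ext = os.path.splitext(filename)
        if ext != '.py' or module in portal.objectIds():
            continue
        # ExternalMethod(id, title, module, function) -- assumes each module
        # defines a function named after the module itself.
        portal._setObject(module, ExternalMethod(module, '', module, module))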
However, are you absolutely certain you want to use an ExternalMethod? Wouldn't a simple utility or view be easier?
On the project I am working on, some proxy items were at some point added from source location A to location B. However, right now it is not possible to check the source of the proxy, and the proxy folder in B does not show anything to suggest that it is a proxy, besides the visual cue that it is grayed out.
After reading this article, I looked into the web.config and found this:
<proxiesEnabled>false</proxiesEnabled>
<publishVirtualItems>true</publishVirtualItems>
This seems to suggest that when the proxies were published, they were published as regular items and lost any connection to their source. Since I want to recreate the proxies (due to some weird issues where layout settings on the template's standard values item were not propagating correctly to the proxied items), I tried to rename the old proxy folder and create a new one. However, when I attempted the rename, I got a modal alert with this message:
"This item occurs in other locations. If you rename it, the item will be renamed in the other locations as well. Are you sure you want to rename 'MyFoo'?"
Does this mean the item is still attached to the source?
I am using Sitecore 6.2.0 (rev. 100701)
I suppose that the settings you mentioned are for the master database. Now if you take a closer look at the article you reference, it lists two valid cases of a proxies setup:
when the web database also relies on proxies
when the web database contains only regular items, which came from publishing
Both of these cases assume that the master database has proxiesEnabled='true'. It doesn't make sense otherwise: if proxies are disabled, the rest of the mechanism doesn't work and there are no virtual items.
Yet I can see proxiesEnabled='false' in the example you mentioned.
I'm not sure about the message you get. But if I needed to change the proxy definition, I would do the following (a config sketch follows the list):
make sure proxiesEnabled='false' for the web database (I guess this is your intention)
disable proxies for the master database and change the proxy definitions the way you want
set publishVirtualItems to true for the master database
turn proxies back on for the master database
make sure the virtual items are in place and publish the site
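A sketch of where those switches live, assuming the elements sit under each database definition in web.config as in typical Sitecore 6.x configurations:

<!-- master: proxies enabled again (step 4), virtual items published as real items -->
<database id="master">
  <proxiesEnabled>true</proxiesEnabled>
  <publishVirtualItems>true</publishVirtualItems>
</database>
<!-- web: no proxies; it only receives published items -->
<database id="web">
  <proxiesEnabled>false</proxiesEnabled>
</database>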
Try this in a test environment and experiment to get the behavior you'd like - playing with the live site is bad karma :)