Microsoft offers the Lifecycle Management service, which lets me set up a rule that triggers an action. There, I can delete old blobs by setting an expiration age. However, after all the blobs are deleted, the container remains there, forever empty.
Is there any configuration that also deletes the container whenever it is x days old and/or empty?
We don't have container deletion as part of lifecycle management today. We are planning to add it in the future.
You can share your feedback or suggestions here. All the feedback you share in these forums will be monitored and reviewed by the Microsoft engineering teams responsible for building Azure.
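In the meantime, if you need the containers cleaned up, a small scheduled job outside of lifecycle management can approximate this. Below is a minimal sketch using the Python azure-storage-blob SDK; it assumes the connection string is supplied via an environment variable and simply removes any container that has no blobs left (it does not try to track container age):

```python
import os
from azure.storage.blob import BlobServiceClient

# Assumption: the storage connection string is provided via an environment variable.
service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])

for container in service.list_containers():
    container_client = service.get_container_client(container.name)
    # next(..., None) returns None when the container has no blobs at all.
    if next(iter(container_client.list_blobs()), None) is None:
        print(f"Deleting empty container: {container.name}")
        container_client.delete_container()
```

You could run something like this on a timer (for example from a scheduled job or an Azure Function) after the lifecycle rules have expired the blobs.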
I updated my application because of this security vulnerability.
Is it possible to set a policy on Azure Blob Storage so that only blobs without encryption or with ClientSideEncryptionVersion.V2_0 can be uploaded?
Upload attempts with ClientSideEncryptionVersion.V1_0 should be blocked.
We don't have this feature today. You may leave your feedback here. All the feedback you share in these forums will be monitored and reviewed by the Microsoft engineering teams responsible for building Azure.
Client-side encryption can only be done using the SDK, because the CEK (content encryption key) is used to encrypt the data before it is uploaded, so you have to write your own custom logic for this scenario. Also make sure that all of your applications are upgraded to the latest SDK.
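As a rough illustration of what that custom logic might look like on the application side, here is a Python sketch with azure-storage-blob that pins uploads to the 2.0 client-side encryption protocol. The local key wrapper, the container/blob names, and the exact attributes (require_encryption, key_encryption_key, encryption_version) are based on my understanding of the SDK's client-side encryption support, so treat this as a sketch and verify against the SDK version you are using:

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
from azure.storage.blob import BlobServiceClient


class LocalKeyWrapper:
    """Hypothetical key-encryption key (KEK) implementing the interface the SDK expects.
    NOTE: a throwaway in-memory key for illustration only; manage a real KEK (e.g. in Key Vault)."""

    def __init__(self, kid: str, key: bytes):
        self.kid = kid
        self.key = key

    def get_kid(self) -> str:
        return self.kid

    def get_key_wrap_algorithm(self) -> str:
        return "A256KW"

    def wrap_key(self, cek: bytes) -> bytes:
        return aes_key_wrap(self.key, cek)

    def unwrap_key(self, wrapped: bytes, algorithm: str) -> bytes:
        return aes_key_unwrap(self.key, wrapped)


service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
blob_client = service.get_blob_client(container="mycontainer", blob="report.bin")  # assumed names

# Enforce client-side encryption and pin the protocol to 2.0 for every upload in the app.
blob_client.require_encryption = True
blob_client.key_encryption_key = LocalKeyWrapper("demo-kek", os.urandom(32))
blob_client.encryption_version = "2.0"  # do not fall back to the vulnerable 1.0 protocol

blob_client.upload_blob(b"sensitive data", overwrite=True)
```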
Alternatively, would it be possible to use Azure Metrics Explorer to evaluate upload attempts by their ClientSideEncryptionVersion?
No. Encryption metadata is stored with the blob, but we don’t have granular tracking for what metadata is set.
Or, alternatively, can you for example add a tag or an application version at upload time that is saved together with the blob?
This is completely dependent on the application and the features you use in storage/the SDK. There isn’t anything automatically enabled by the SDK.
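For example (purely a sketch of application-side logic, not an SDK feature), you could stamp your own metadata on each blob at upload time and enumerate it later for auditing. The container name and the metadata key used here are made up for illustration:

```python
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
container = service.get_container_client("mycontainer")  # assumed container name

# At upload time, record which application version produced the blob (your own convention).
blob = container.get_blob_client("invoice-42.json")
blob.upload_blob(b"{}", overwrite=True, metadata={"uploader_version": "2.1.0"})

# Later, audit which version uploaded each blob.
for props in container.list_blobs(include=["metadata"]):
    version = (props.metadata or {}).get("uploader_version", "<unknown>")
    print(props.name, version)
```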
Please let me know if you have any further queries.
Apart from technology support, what are the business benefits of Oracle WebLogic Server? For example, in areas such as security, support, etc.
What are the new features supported by WebLogic?
TL;DR:
Support is great when you open a ticket with Oracle Support (strictly for WebLogic).
Great admin/read-only user implementation. We authenticate against Windows Active Directory. Developers get read-only accounts, which reduces the churn of waiting for ops to transfer logs and validate settings.
The dashboard is useful out of the box for real-time monitoring without additional tools or installs, and is easily accessed by anyone who is authorized to log in. We could give it to our CIO if he wanted, in about 3 minutes, by adding him to the right authorized group in AD.
Easier to clone environments.
I haven't worked with OC4J, but I believe Oracle's roadmap picks WebLogic as their preferred Java application server. You can see it is the base technology for some of their other products, such as Oracle Service Bus, Oracle Enterprise Manager (OEM), and Oracle Line Planning.
I have opened 3 Oracle tickets in the past month. I was surprised at how fast they answered. For a Severity 3 (medium) ticket, they have usually responded within 2-3 days. I can't say the same for their other services (over 2 weeks for a ticket on OEM).
Security is a pretty broad scope... so you'd have to be a little more specific on some of the topics of security.
One thing that is pretty awesome is the Dashboard: http://docs.oracle.com/cd/E14571_01/web.1111/e13714/dashboard.htm You can obviously add read-only monitor accounts so other users can get insight into performance. We add developers to this so that they can validate settings or see performance whenever there is a production issue.
We use Microsoft Active Directory authentication in our WebLogic domains. People are not using the default weblogic administrator user, so configuration changes are audited. When someone's account is disabled because they leave the company, their access to WebLogic is disabled as well; you don't have to change the password.
Another useful setting I like is the ability to automatically archive configuration changes. Each time someone makes a config change, a backup is created automatically. This allows me to go fix something when developers break their environment without having to do major reverse-engineering of what they did.
I also like the fact that you can pack and unpack domains. I've used it to move entire domains from staging to production with some minor changes, e.g. changing all stg variables to prod. This should likewise make it easier to 'clone' environments when you want to build out a new one.
Although not related, I should mention Oracle Enterprise Manager. We are an Oracle shop because they seem to have given us a good deal on licensing, so we get to run Oracle Enterprise Manager, a tool that is slowly becoming more and more useful. The agent also reports how our RedHat Linux hosts are behaving: network input/output, CPU utilization, memory utilization, Java heap usage. We are going to move to defining groups within OEM that contain all the targets related to an application stack. This will give our operations team the insight to see where the bottleneck might be... the Oracle WebLogic web layer, the network, Oracle Service Bus, or Oracle Database performance.
Supposedly, you can add JBoss and other JMX monitoring to OEM as well. It's on our to-do list for non-WebLogic instances. We're slowly rolling OEM out.
I'm currently building a site that will be hosted in Microsoft Azure. The last site I created in this hosting environment used "Windows Azure Shared Caching". Some of you may already be aware that the "Windows Azure Shared Caching" service will be deprecated over the next year.
I have applied for the preview release of "Windows Azure Cache". However, I'm finding that my request is still "queued".
I wouldn't mind using "Windows Azure Shared Caching", since the site I'm building will only be live for around 10 weeks and the fact that it is being deprecated next year doesn't worry me. However, I am unable to create a new caching service through the old Azure Management Portal, since new caching has to be done using "Windows Azure Cache".
So my question...
Since my application for the new caching platform has still not been approved and I am unable to create a new caching service under the old platform, what other options are there? Have I missed something?
Microsoft is surely making things difficult.
The other option you have is using In-Role Cache for Web/Worker roles (Azure Cloud Services). Any role within the same cloud deployment can access the cache. If you have just one web role, this acts very similarly to ASP.NET State Server, which provides an in-memory cache. However, as you add more web roles, you can choose to distribute this in-memory cache across all roles or use a dedicated worker role for managing the cache.
Dedicated In-Role Cache: worker role uses all available memory
Co-Located In-Role Cache: percentage of available memory is used across all roles
See In-Role Cache FAQ on MSDN for more details.
Your request should have been approved (irrespective of whether yours is a paid/trial/free subscription). If it still hasn't been, post the query here; this is the forum for Cache.
This is a proper release of the Cache Service! The core is very mature and Microsoft is giving great support on top of it. Go ahead and use it!
This flavor of Cache is THE right one for Azure Websites.
Leave a post at the forum for any concerns/issues you have. It is being constantly monitored and replied to.
I guess I am the first to suggest Azure Redis Caching?
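For what it's worth, Redis is attractive precisely because any standard Redis client can talk to it, regardless of what the site is written in. A minimal sketch with the Python redis client, where the host name and access key are placeholders for the values shown in the portal for your cache:

```python
import redis

# Placeholders: copy the host name and access key from the portal for your cache instance.
cache = redis.StrictRedis(
    host="yourcache.redis.cache.windows.net",
    port=6380,               # the SSL port exposed by Azure Redis Cache
    password="<access key>",
    ssl=True,
)

cache.set("session:42", "cached value", ex=600)  # expire after 10 minutes
print(cache.get("session:42"))
```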
It seems to me a wise idea to test-run my workflow on a local server before deploying it at the customer's. To be entirely sure, I'd like to copy all the data from their DB to my test organization (I have full access rights). The problem is that I can't see any straightforward way to export the whole shebang to an XML Spreadsheet.
What's the best way to export/import everything from/to a DB? The source and the target servers are not the same.
Of course I've got the option of backing up the client's DB and restoring it, should the brown stuff hit the fan, but it'll be far more professional if I don't have to.
The client's DB is in the cloud, which makes me suspect that perhaps I won't be able to access it at all, and as far as I can see, there's no way to back up the data there. Am I missing something, or is it really that bad?
I fully agree that would be sensible. We usually have a number of development and test servers for all our work; generally, however, we do not exactly mirror the data in the client database.
We create a representative sample of data on our dev servers and then just move the CRM solution across for deployment.
As far as I know there is no straightforward way to get all the data. If you really want to do this, I would suggest taking a backup of their database and importing it into yours.
(As a side note, not all clients are happy for copies of their database - especially if it's a live system - to be taken off site. Personally, if it is a live database, I wouldn't take that risk on yourself; if the data gets lost or leaked you might suffer the consequences.)
James raises good points about the business aspects of your request; however, to get hold of the record-level data there are a few options. The easiest by far is a wholesale export and import of the underlying SQL database. (For the record, the alternative is to do a data migration from live into a different DB, but this is no small task, so I won't entertain it any further here.)
You mention that the client is using CRM Online ("...client's DB is in the cloud..."). You can raise a (free) support request with CRM Online Support who will provide you with a copy of the YourOrg_MSCRM database which can then be reimported into an on-premise deployment.
If you wish to simply have a test instance that is a copy of the Microsoft CRM Online organization, Microsoft does provide a means to do that. Depending on how many professional user licenses the customer has, this may be free, but it could be an extra cost, and both instances would count against the storage limit for Microsoft CRM Online. You can see full details here - https://community.dynamics.com/crm/b/crmteamblog/archive/2014/03/20/introducing-sandbox-instances-in-crm-online.aspx . You can see the steps for setting up a sandbox instance here - https://technet.microsoft.com/en-us/library/dn467371.aspx "Add an instance to your subscription".

This is something that I have used with one of our Microsoft CRM customers, as it was a very good way to help validate the Scribe Online migration and customization changes we were making before moving those into production. The nice thing about doing it this way is that everything is still contained in the same Office 365 tenant and you can limit which users have access to the sandbox organization, which is important for customers in knowing that their data is safe and not on some unknown server or machine.
I've added a setting to ServiceConfiguration.cscfg with the idea that it will allow me to turn a feature of the MVC app on and off. The code correctly reads the setting; however, while running the app in the local dev compute emulator, I don't see the ServiceConfiguration.cscfg file in the .csx directory. I only see the ServiceDefinition.csdef file, which has the key but not the value. I want to change the value.
The idea is that I have a text file I can alter after deploying, allowing me to turn parts of the app on or off by opening the text file on Azure and making changes.
I don't want to be dependent on Azure Storage or a hop off the Azure box.
What is the best way to change my own app config setting in Azure?
Well,
Your path is correct. ServiceConfiguration.cscfg is one of the places where you could have service-wide settings. And there is one gotcha here: you can't dynamically change the service configuration with the local Azure emulator. If you want to change something in the service configuration, you have to stop your debugging session, change the setting, and start a new session. Only in the live Azure environment can you change the service configuration, and it will be propagated to all instances.
I intentionally bolded service-wide settings. With full IIS mode (available since SDK 1.3) you can have multiple web sites per single Web Role. That would mean multiple applications. Now, I would not want to mix settings for one of the applications with settings for the other. That is why I would put application-wide settings in an Azure Table, and your application can query this table every N seconds/minutes, depending on your targeted response time.
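To make the "query this table every N seconds" idea concrete, here is a rough sketch using the Python azure-data-tables package; the table name, keys, and the FeatureX flag are all made up for illustration, and the same pattern applies from a .NET role:

```python
import os
import time
from azure.data.tables import TableServiceClient

REFRESH_SECONDS = 30  # how stale a setting is allowed to be

service = TableServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
settings_table = service.get_table_client("AppSettings")  # assumed table name

_cache = {"value": False, "fetched_at": 0.0}

def feature_enabled() -> bool:
    """Return the current value of a hypothetical 'FeatureX' flag, re-reading it every REFRESH_SECONDS."""
    now = time.time()
    if now - _cache["fetched_at"] > REFRESH_SECONDS:
        entity = settings_table.get_entity(partition_key="mvc-app", row_key="FeatureX")
        _cache["value"] = bool(entity["Enabled"])
        _cache["fetched_at"] = now
    return _cache["value"]
```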
I wonder what your thoughts are behind the "I don't want to be dependent on Azure Storage" statement? Above all, you are developing an application for the Windows Azure platform. Aren't you going to have any dynamic data? File uploads or file generation or anything like that? Check out the Windows Azure Storage SLA. I don't think Windows Azure storage (in your case I suggest Tables) would do any harm to your application, especially when your service deployment is in the same geographic region as your storage account.