I'm writing code to enumerate the devices in a network using the WNetOpenEnum and WNetEnumResource functions. The MSDN documentation at http://msdn.microsoft.com/en-us/library/windows/desktop/aa385478(v=vs.85).aspx uses the term "container resource". I have googled but didn't find much information on it, so please let me know what exactly a container resource means.
As the documentation page you linked says:
The WNetOpenEnum function is used to begin enumeration of the resources in a single container. The following examples show the hierarchical structure of a Microsoft LAN Manager network and a Novell NetWare network and identify the containers.
LanMan (container, in this case the provider)
    ACCOUNTING (container, in this case the domain)
        \\ACCTSPAY (container, in this case the server)
            PAYFILES (disk)
            LASERJET (print)
Here the containers are LanMan, ACCOUNTING and \\ACCTSPAY.
NetWare (container, in this case the provider)
    MARKETING (container, in this case the server)
        SYS (disk, first one on any NetWare server)
        ANOTHERVOLUME (disk)
        LASERJET (print)
Here NetWare and MARKETING are containers.
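For illustration (not part of the MSDN page), here is a minimal Python/ctypes sketch that starts at the network root and recurses only into resources whose dwUsage has the RESOURCEUSAGE_CONTAINER bit set; the same WNetOpenEnum / WNetEnumResource / WNetCloseEnum sequence applies if you call the API from C:
import ctypes
from ctypes import wintypes

mpr = ctypes.WinDLL("Mpr")

# Constants from winnetwk.h
RESOURCE_GLOBALNET      = 0x00000002
RESOURCETYPE_ANY        = 0x00000000
RESOURCEUSAGE_CONTAINER = 0x00000002
NO_ERROR                = 0
ERROR_NO_MORE_ITEMS     = 259

class NETRESOURCE(ctypes.Structure):
    _fields_ = [("dwScope", wintypes.DWORD),
                ("dwType", wintypes.DWORD),
                ("dwDisplayType", wintypes.DWORD),
                ("dwUsage", wintypes.DWORD),
                ("lpLocalName", wintypes.LPWSTR),
                ("lpRemoteName", wintypes.LPWSTR),
                ("lpComment", wintypes.LPWSTR),
                ("lpProvider", wintypes.LPWSTR)]

def enum(container=None, depth=0):
    handle = wintypes.HANDLE()
    # Passing None as lpNetResource starts the enumeration at the network root.
    res = mpr.WNetOpenEnumW(RESOURCE_GLOBALNET, RESOURCETYPE_ANY, 0,
                            ctypes.byref(container) if container is not None else None,
                            ctypes.byref(handle))
    if res != NO_ERROR:
        return
    buf = ctypes.create_string_buffer(16384)
    while True:
        count = wintypes.DWORD(0xFFFFFFFF)        # "as many entries as fit"
        size = wintypes.DWORD(ctypes.sizeof(buf))
        res = mpr.WNetEnumResourceW(handle, ctypes.byref(count), buf,
                                    ctypes.byref(size))
        if res != NO_ERROR:                       # includes ERROR_NO_MORE_ITEMS
            break
        items = ctypes.cast(buf, ctypes.POINTER(NETRESOURCE))
        for i in range(count.value):
            r = items[i]
            is_container = bool(r.dwUsage & RESOURCEUSAGE_CONTAINER)
            print("    " * depth, r.lpRemoteName or r.lpProvider,
                  "(container)" if is_container else "")
            if is_container:
                enum(r, depth + 1)                # only containers can be enumerated further
    mpr.WNetCloseEnum(handle)

enum()
Only the entries flagged as containers (the provider, domain, and server levels in the examples above) can themselves be passed back into WNetOpenEnum; disks and printers are leaves.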
I'm a computer science teacher in a secondary school. The school has a simple network composed of 8 UniFi WiFi APs plus a controller that supports RADIUS authentication and accounting. Everything is directly connected to a single router (there are also 30 PCs connected via Ethernet cable).
The WiFi network "should be" used exclusively by teachers (around 70), but systematically some "clever" students, using some sort of social engineering attack, manage to retrieve the WPA2 WiFi passphrase and access the network. Hence, after a couple of weeks the network is saturated (there are 700 students in the school!). For that reason I would like to move to WPA2 Enterprise authentication.
I've installed a Lubuntu distro with FreeRADIUS + MySQL + daloRADIUS on an old machine, and everything seems to work properly, at least locally!
In FreeRADIUS I created a group called "teacher" and associated all the teachers with that group. That group also has the attribute "Simultaneous-Use := 1" in the radgroupcheck table; obviously every user/teacher has their own "Cleartext-Password" in the radcheck table.
DESIRED REQUIREMENT: I do not want a bulletproof WiFi network, but a reliable solution at least for teachers. I can accept that, after an account violation, some students may be able to use the network (e.g. 3-4 concurrent sessions), but massive usage of the WiFi network shall be avoided.
Here are my doubts:
I've heard that Ubiquiti UniFi APs are not so reliable in terms of accounting (sometimes the session is not properly closed), so I could face authentication problems even for trusted users. Given the requirement above, can I tune the FreeRADIUS attributes (Simultaneous-Use, Session-Timeout, etc.) to avoid major problems for teachers?
Other suggestions? E.g. a shell script in cron to close left-open sessions after some time (a rough sketch of that idea follows below), or DHCP lease-time tuning.
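For instance, something along these lines could run from cron every few minutes (a minimal sketch only, assuming the stock FreeRADIUS MySQL schema; the host, credentials, and the 2-hour cutoff are placeholders):
#!/usr/bin/env python3
# Rough sketch: close radacct sessions that have been open for more than
# MAX_HOURS, so Simultaneous-Use checks are not blocked by stale entries.
# Assumes the default FreeRADIUS MySQL schema; credentials are placeholders.
import pymysql

MAX_HOURS = 2

conn = pymysql.connect(host="localhost", user="radius",
                       password="radiuspass", database="radius")
try:
    with conn.cursor() as cur:
        cur.execute(
            """
            UPDATE radacct
               SET acctstoptime = NOW(),
                   acctsessiontime = TIMESTAMPDIFF(SECOND, acctstarttime, NOW()),
                   acctterminatecause = 'Stale-Session-Cleanup'
             WHERE acctstoptime IS NULL
               AND acctstarttime < NOW() - INTERVAL %s HOUR
            """,
            (MAX_HOURS,))
        print(f"Closed {cur.rowcount} stale session(s)")
    conn.commit()
finally:
    conn.close()
Sending a RADIUS Disconnect-Request (e.g. with radclient) would be the cleaner alternative, if the controller supports it.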
According to a comment on a StackOverflow answer,
In a web or worker role you have to use an Azure Drive - which has much lower performance than an Azure Disk which you get with a VHD.
Reference: blogs.msdn.com/b/windowsazurestorage/archive/2012/06/28/…
– Matt Johnson Feb 19 at 20:15
However, I've read through this reference link and other related documentation, and I cannot find anything to support the assertion that a PaaS Cloud Drive is slower than an IaaS disk. In fact, the only thing I do see is that drives work on 2 MB chunks, whereas disks work on 128 KB chunks. I would therefore assume that drives would be more performant than disks.
Drives: IO < 2 megabytes will be 1 transaction; IO >= 2 megabytes will be broken into transactions of 2MBs or smaller
Disks: IO < 128 kilobytes will be 1 transaction; IO >= 128 kilobytes will be broken into transactions of 128KBs or smaller
Does anyone have any real world metrics or links to indicate the perf difference between these two options?
The two features are currently implemented differently.
Azure Drive is a filesystem filter driver that intercepts the NTFS calls, converts them to REST, and forwards them to the Azure blob backing the drive (a page blob). The network IO counts against the quota of the VM (each core of a VM gets 100 Mbps).
Data disks are implemented within the Azure hypervisor and are presented to the guest OS as a mountable drive. Same basic idea - it converts the calls to the drive to REST and interacts with the Azure blob backing the disk (still a page blob). The network IO for the calls to storage does not count against the guest OS, so you still have 100 Mbps per core for 'regular' network traffic while making calls to the data disk.
For both, there are local caching options whose impact will vary with the specific workload and IO patterns.
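To put the 2 MB / 128 KB chunk sizes in perspective, here is a trivial back-of-the-envelope calculation (illustrative 4 MB IO only, not a benchmark):
import math

def transactions(io_bytes, chunk_bytes):
    # An IO smaller than the chunk size is one transaction; a larger IO is
    # split into chunk-sized pieces.
    return max(1, math.ceil(io_bytes / chunk_bytes))

io = 4 * 1024 * 1024                        # a hypothetical 4 MB write
print(transactions(io, 2 * 1024 * 1024))    # Drive:  2 transactions
print(transactions(io, 128 * 1024))         # Disk : 32 transactions
Fewer, larger transactions do not automatically translate into higher throughput, though; per-transaction latency, caching, and the network quota mentioned above usually matter more, which is why real measurements are the only reliable answer to the question asked.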
I would recommend a quick read of the following for more details:
http://blogs.msdn.com/b/windowsazurestorage/archive/2012/11/04/windows-azure-s-flat-network-storage-and-2012-scalability-targets.aspx
http://blogs.msdn.com/b/windowsazurestorage/archive/2012/06/28/exploring-windows-azure-drives-disks-and-images.aspx
There is a huge variance in the launch times of Windows AMIs (EBS-backed) that I am using. Some start up in just 3 minutes. Others can take 20+ minutes. My understanding is that the default Windows AMIs can be slow as they require two reboots to get active, but in my case these are all customized machines, either public or snapshots I have created.
On a similar note, I was retrieving the log files in the EC2 console to know when my machine had started. However, some of the machines do not seem to generate any logs. So, realistically, I have a variable startup time and variable logging, in which case how can I even tell that a Windows machine has become available?
It does take a varying amount of time to launch a Windows AMI in EC2. You can minimize it a bit by setting a fixed machine name for the instance. Do this as you would on any Windows computer: in the properties of "My Computer", on the "Computer Name" tab. Then run "EC2ConfigService Settings" from the Start Menu's "All Programs" list; that program is installed there by Amazon on most base AMIs. In that program, on the "General" tab, uncheck "Set Computer Name". This stops the system from rebooting itself once while starting up the image, as it otherwise would have to in order to set the name.
Still, you would like to be notified when your instance is ready! This is a perfect job for Amazon's Simple Notification Service. The service (also known as SNS) is simple to use programmatically (from a Windows .NET project, for example), free (for the first 100,000 messages, under 1 GB in total), and the notifications are immediate.
Code to send a notification (in VB.NET):
Imports Amazon.SimpleNotificationService
Imports Amazon.SimpleNotificationService.Model

Dim LabSNS As New AmazonSimpleNotificationServiceClient(Lab_AWSKey, Lab_AWSSecretKey) 'Credentials are globals
Dim PubReq As New PublishRequest
Dim Msg As String 'Message to be built up, then sent. It is the body of the e-mail.
Msg = "The instance is running and ready!"
Msg = Msg & vbCrLf & "Previous state of machine was: " & PreviousState 'A made-up global
'Append any other information you want to send yourself about the start of the instance here.
PubReq.WithTopicArn(Topic) 'Topic is a global. Its value is the topic ARN from the SNS topic setup.
PubReq.WithSubject("EC2 Instance is Ready!")
PubReq.WithMessage(Msg)
LabSNS.Publish(PubReq)
The code requires Amazon's SDK for .NET which is free. Write a program including some code like the above. Set the program to run after the computer starts, and before login, using the Windows Task Scheduler - create a task triggered "at system startup" that calls the program.
The setup for SNS is documented here: SNS Documentation
It looks like a lot of trouble to send eMail, however, Amazon's EC2 environment is highly restrictive when it comes to sending eMail. Many have tried to use EC2 as a spam platform, so Amazon has been thorough in blocking SMTP (eMail) traffic, except as prescribed by Amazon. You can't just open up a port on the Amazon security group to bypass Amazon's blocks.
Amazon does have a general eMail facility one can use from within EC2. It is called, Amazon Simple Email Service (SES). That will not work well for you, as it is designed for bulk eMail. So, SES's pricing, exception handling and messaging won't fit well with what you need, I don't think.
SNS, on the other hand, works great for this. It sends an initial e-mail to the recipients (you, and perhaps others you may want to notify of your server coming online) asking whether they want to receive future messages on the topic; they are given an option to opt out, and must confirm before they receive anything further.
The setup process (shown in the blocks above) is all easily doable from Amazon's AWS Management Console. (Your question implies that you already have the AWS EC2 account needed for this.) Once set up, any instance launched from the AMI will send out an e-mail containing whatever information (available to your program) you choose, as soon as the machine is ready.
It'll be gotcha-free in setting up, and solid as a rock in operation.
Regardless of the source of your Windows AMI, it will reboot a number of times during its startup process before it becomes available via RDP. All Windows AMIs are derived from the Windows AMIs produced by Amazon, which have this boot process by design. [It's been suggested that this boot process is hard-coded into a custom kernel that is running inside the guest VM.]
Console logs typically take between 2 and 5 minutes to appear.
Unfortunately, Windows on EC2 is more difficult to automate and track than Linux. The RightScale and Scalr folks have done some excellent work integrating Windows into their management platforms, and the Opscode Chef configuration management tool also supports Windows in EC2 and can help you discover when your instances are ready for use.
Just a question about Azure.
Yes, I know roughly about Azure and cloud computing. I will put it this way:
Say, in the normal way, I build a program listening on a TCP port and run this server program on a server. I also build a client program, which connects to the server through the specified port. Once a client is connected, my server program computes something and returns the result to the client.
The above is the normal model, or rather my program's model.
Now I want to use Azure, because I have too many clients, let's say 1 million a day, and I don't want to rent 1000 servers and maintain them (the number of clients is just an assumption).
I have looked at the Azure pricing plan. It talks about CPU and about small, medium, and large instances.
I don't know what they mean. For example, in my assumed case above, how many instances do I need? Or is the most I can get from Azure an extra-large instance (8 small instances?)
How does Azure scale for my program? If I choose a small instance (my server program is very small, it just computes some data and returns it to clients), will Azure scale for me? Or does Azure just give me one virtual server and let it get overloaded?
Please consider the CPU only, not storage or network traffic.
You choose two things: what size of VM to run (small, medium, large) and how many of those VMs to run. That means you could choose a small VM (single processor) and run 100 "instances" of it (100 VMs), or you could choose a large VM (eight processors on the same server) and run 10 instances of it (10 VMs).
Today, Windows Azure doesn't automatically adjust your scale, so it's up to you to use the web portal or the Service Management API to increase the number of instances as your need increases.
One factor to consider is whether your app can take advantage of multi-core environments (multi-threading, shared memory, etc.) to improve its scale. If it can, it may be better to use five 2-core (i.e. medium) VMs than ten 1-core (small) VMs. You may find in some cases that two 4-core VMs perform better than five 2-core VMs.
If your app is not parallel/multi-core, then you could just run some number of small VMs. The charges are linear anyway - i.e. a 2-core VM is twice the cost of a single-core one.
Other factors would include the scratch disk size & memory available in the VM.
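As a purely illustrative sizing sketch (the 1 million clients/day figure comes from the question; the peak factor and the per-instance throughput are made-up placeholders you would have to measure for your own workload):
import math

# All numbers except the 1 million/day figure from the question are placeholders.
clients_per_day = 1_000_000
peak_factor = 3                      # assume peak traffic is 3x the average
avg_rps = clients_per_day / 86_400   # ~11.6 requests/second on average
peak_rps = avg_rps * peak_factor

per_small_instance_rps = 50          # hypothetical measured capacity of one small VM

instances = math.ceil(peak_rps / per_small_instance_rps)
print(f"avg {avg_rps:.1f} rps, peak {peak_rps:.1f} rps -> {instances} small instance(s)")
With numbers like these, 1 million short requests per day is actually quite modest; the point is to measure what one instance of your app can handle and divide.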
One other suggestion: you may want to look into leveraging Azure queues (i.e. have the client post to the queue and the workers pull from there). This would allow you to transparently (to the client) increase or decrease the number of workers without worrying about connections, etc. Also, if a processing step fails and crashes your instance, the message persists and is picked up by one of the others.
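As a rough illustration of that queue pattern, here is a sketch using the present-day azure-storage-queue Python package (which postdates this answer; the connection string, queue name, and do_work are placeholders):
from azure.storage.queue import QueueClient

def do_work(payload: str) -> str:
    return payload.upper()                   # placeholder for the real computation

conn_str = "<storage account connection string>"   # placeholder
queue = QueueClient.from_connection_string(conn_str, "work-items")  # assumes the queue exists

# Client side: drop a work item on the queue instead of holding a connection
# open to one particular worker instance.
queue.send_message("compute-job-42")

# Worker side: any number of worker instances can run this same loop.
for msg in queue.receive_messages(visibility_timeout=60):
    do_work(msg.content)
    queue.delete_message(msg)                # delete only after success, so a crashed
                                             # worker leaves the message for another one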
I suggest you also monitor, evaluate, and perfect the results of your Azure configuration.
For "Monitoring Applications in Windows Azure" (and performance) please reference
http://channel9.msdn.com/learn/courses/Azure/Deployment/DeployingApplicationsinWindowsAzure/Exercise-3-Monitoring-Applications-in-Windows-Azure/
There is also a good blog entry titled "Visualizing Windows Azure diagnostic data"
Check out http://www.paraleap.com - simple service for automatically adjusting number of instances that you have according to demand.
I have to move a Windows-based multi-threaded application (which uses global variables as well as an RDBMS for storage) to an NLB (i.e. network load balancer) cluster. The common architectural issues that immediately come to mind are:
Global variables (which are both read and written) will have to be moved to shared storage. What are the best practices here? Is there anything available in the Windows Clustering API to manage such things?
My application uses sockets, and persistent connections are the norm in the field I work in. I believe persistent connections cannot be load balanced. Again, what are the architectural recommendations in this regard?
I'll answer the persistent connection part of the question first since it's easier. All good network load-balancing solutions (including Microsoft's NLB service built into Windows Server, but also including load balancing devices like F5 BigIP) have the ability to "stick" individual connections from clients to particular cluster nodes for the duration of the connection. In Microsoft's NLB this is called "Single Affinity", while other load balancers call it "Sticky Sessions". Sometimes there are caveats (for example, Microsoft's NLB will break connections if a new member is added to the cluster, although a single connection is never moved from one host to another).
Re: global variables, they are the bane of load-balanced systems. Most designers of load-balanced apps will do a lot of re-architecture to minimize dependence on shared state, since it impedes the scalability and availability of a load-balanced application. Most of these approaches come down to a two-step strategy: first, move shared state to a highly available location, and second, change the app to minimize the number of times that shared state must be accessed.
Most clustered apps I've seen store shared state (even shared, volatile state like global variables) in an RDBMS. This is mostly out of convenience. You can also use an in-memory database for maximum performance, but the simplicity of using an RDBMS for all shared state (transient and durable), plus the use of existing database tools for high availability, tends to work out well for many services. Performance of an RDBMS is of course orders of magnitude slower than global variables in memory, but if shared state is small you'll be reading out of the RDBMS's cache anyway, and if you're making a network hop to read/write the data the difference is relatively smaller. You can also make a big difference by optimizing your database schema for fast reading/writing, for example by removing unneeded indexes and using NOLOCK for all read queries where exact, up-to-the-millisecond accuracy is not required.
I'm not saying an RDBMS will always be the best solution for shared state, only that improving shared-state access times is usually not the way that load-balanced apps get their performance -- instead, they get performance by removing the need to synchronously access (and, especially, write to) shared state on every request. That's the second thing I noted above: changing your app to reduce dependence on shared state.
For example, for simple "counters" and similar metrics, apps will often queue up their updates and have a single thread in charge of updating shared state asynchronously from the queue.
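A minimal sketch of that pattern (illustrative Python, not from the original answer): request handlers enqueue deltas and a single writer thread applies them, so no request ever blocks on the shared state:
import queue
import threading

updates = queue.Queue()
counters = {"page_views": 0}          # the "shared state"

def writer():
    # Single thread that owns all writes to the shared state.
    while True:
        name, delta = updates.get()
        counters[name] += delta
        updates.task_done()

threading.Thread(target=writer, daemon=True).start()

def handle_request():
    # The request path never touches shared state synchronously;
    # it just queues the update and returns immediately.
    updates.put(("page_views", 1))

for _ in range(1000):
    handle_request()
updates.join()                        # wait for the writer to drain the queue
print(counters["page_views"])         # -> 1000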
For more complex cases, apps may switch from Pessimistic Concurrency (checking that a resource is available beforehand) to Optimistic Concurrency (assuming it's available, and then backing out the work later if you ended up, for example, selling the same item to two different clients!).
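A toy sketch of the optimistic approach (illustrative only; the version field plays the role a rowversion column would play in a database): read the current version, do the work, and only commit if nothing else changed it in the meantime, retrying or compensating otherwise:
import threading

class VersionedRecord:
    """Toy optimistic-concurrency store: a write succeeds only if the caller
    saw the latest version; otherwise it must retry (or compensate)."""
    def __init__(self, value):
        self._lock = threading.Lock()   # protects only the check-and-set itself
        self.value = value
        self.version = 0

    def read(self):
        return self.value, self.version

    def try_write(self, new_value, expected_version):
        with self._lock:
            if self.version != expected_version:
                return False            # someone else got there first
            self.value = new_value
            self.version += 1
            return True

stock = VersionedRecord(value=5)        # 5 items left to sell

def sell_one():
    while True:
        qty, ver = stock.read()
        if qty == 0:
            return False                # sold out -- compensate (refund, apologise)
        if stock.try_write(qty - 1, ver):
            return True                 # our optimistic write won

print(sell_one(), stock.read())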
Net-net, in load-balanced situations, brute-force solutions often don't work as well as thinking creatively about your dependency on shared state and coming up with inventive ways to avoid waiting for synchronous reads or writes of shared state on every request.
I would not bother with using MSCS (Microsoft Cluster Service) in your scenario. MSCS is a failover solution, meaning it's good at keeping a one-server app highly available even if one of the cluster nodes goes down, but you won't get the scalability and simplicity you'll get from a true load-balanced service. I suspect MSCS does have ways to share state (on a shared disk) but they require setting up an MSCS cluster which involves setting up failover, using a shared disk, and other complexity which isn't appropriate for most load-balanced apps. You're better off using a database or a specialized in-memory solution to store your shared state.
Regarding persistent connections, look into the port rules, because port rules determine which TCP/IP ports are handled and how.
MSDN:
When a port rule uses multiple-host load balancing, one of three client affinity modes is selected. When no client affinity mode is selected, Network Load Balancing load-balances client traffic from one IP address and different source ports on multiple cluster hosts. This maximizes the granularity of load balancing and minimizes response time to clients. To assist in managing client sessions, the default single-client affinity mode load-balances all network traffic from a given client's IP address on a single cluster host. The class C affinity mode further constrains this to load-balance all client traffic from a single class C address space.
In an ASP.NET app, what allows session state to be persistent is enabling the client affinity parameter: the NLB then directs all TCP connections from one client IP address to the same cluster host, which allows session state to be maintained in host memory.
The client affinity parameter makes sure that a connection is always routed to the server it initially landed on, thereby maintaining the application state.
Therefore I believe the same would happen for your Windows-based multi-threaded app if you use the affinity parameter.
Network Load Balancing Best Practices and Web Farming with the Network Load Balancing Service in Windows Server 2003 might give you some insight.
Concurrency (Check out Apache Cassandra, et al)
Speed of light issues (if going cross-country or international you'll want heavy use of transactions)
Backups and deduplication (Companies like FalconStor or EMC can help here in a distributed system. I wouldn't underestimate the need for consulting here)