I currently have a problem where some packets are getting dropped on my local network.
But just on one device.
Ping to local router
Here you can see a ping to my router. I only have this problem on my PC; my mobile phone and laptops are completely fine.
I tried a network card and two WLAN USB sticks, all with the same problem.
Does somebody have a clue what could cause these problems?
*OS: Windows 10 21H2
*CPU usage idling around 4-10%
*RAM usage 40%
*Network usage 0-1%
Your question is a bit broad - there are so many things that can disturb a network connection, from physical issues (e.g. cable defects, Wi-Fi interference) to driver problems, CPU bottlenecks, etc. That being said, my tip would be a CPU bottleneck (an app using most or all of your CPU), but even that is by no means certain.
Take a look at your CPU usage with Task Manager or Process Explorer (from the Sysinternals suite). They both also show network usage. If your machine shows excessive CPU load (constantly over 30% with frequent peaks), then you might want to explore the reasons for that, and there can be many.
Using those same tools you can also try to identify apps that are possibly using a lot of network bandwidth.
Windows has a lot happening in the background, and those processes require resources (CPU, RAM, network, hard disk, etc.). Should any of those resources be limited, you can easily see issues like the ones you describe, as there is a certain interdependence between them: e.g. many apps running with limited RAM leads to paging, and since the hard disk is slow, the CPU ends up busy shoveling data and can't keep up with the NIC requests.
However, I am theorizing here. Supply some hard data (machine config, OS info, network info/config, task list, CPU usage, etc.) and we can continue.
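If it helps, here is a rough logging sketch you could run to gather that kind of data. It pings the router once per second and records CPU, RAM and NIC throughput next to each result, so dropped pings can be correlated with load. It assumes the third-party psutil package and Windows-style ping flags; the router address is a placeholder.

```python
# Rough monitoring sketch (not a polished tool): pings the router once per
# second and logs CPU, RAM and NIC throughput next to each result so that
# dropped pings can be correlated with system load.
# Assumptions: the third-party "psutil" package is installed and we are on
# Windows (ping's -n/-w flags); ROUTER is a placeholder address.
import subprocess
import time

import psutil

ROUTER = "192.168.1.1"  # placeholder: put your router's address here

last = psutil.net_io_counters()
while True:
    ok = subprocess.run(
        ["ping", "-n", "1", "-w", "1000", ROUTER],
        stdout=subprocess.DEVNULL,
    ).returncode == 0

    now = psutil.net_io_counters()
    kib_per_s = (now.bytes_recv - last.bytes_recv
                 + now.bytes_sent - last.bytes_sent) / 1024
    last = now

    # cpu_percent() with no interval reports usage since the previous call,
    # so the very first line will show 0.0%.
    print(f"{time.strftime('%H:%M:%S')}  "
          f"ping={'ok' if ok else 'LOST'}  "
          f"cpu={psutil.cpu_percent():5.1f}%  "
          f"ram={psutil.virtual_memory().percent:5.1f}%  "
          f"net={kib_per_s:8.1f} KiB/s")
    time.sleep(1)
```

Let it run while the problem occurs and paste a stretch of the output that contains some lost pings.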
I'm looking to replace a hardware mixer with software, to increase the flexibility of our system and reduce hardware complexity.
We have 4-8 server-class PCs (Windows 7) connected in a local LAN via gigabit Ethernet.
Each PC has a USB sound card which is connected to an 8-input mixer.
The output from the mixer is sent to speakers in a few places.
What I'd like to change is:
route the sound over the network instead
each computer should thus be able to listen to all the others and output its own "mix" (see the sketch at the end of this question)
fewer cables
If possible, support for really cheap hardware (e.g. a Raspberry Pi or something)
There is no hard requirement on latency and such. Up to 100 ms is acceptable (i.e. way higher than your average Quake ping...).
While I prefer open source, I'm also open to redistributable commercial solutions; for this to be economically viable, the license costs can't exceed 30-40 €/server (preferably less).
Grateful for all help!
(Please also share your experiences if possible, not just post links.)
Related question:
https://stackoverflow.com/questions/4297102/how-to-create-a-virtual-sound-card-on-windows (doesn't seem to interconnect over a network)
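To make the idea concrete, here is the kind of thing I have in mind as a rough proof of concept (Python, assuming the third-party sounddevice and numpy packages; the multicast group, port and block size are arbitrary placeholders, and there is no jitter buffering, clock-drift compensation or loss concealment): every machine multicasts its own sound-card capture and plays back the sum of what the other machines send.

```python
# Minimal proof-of-concept, not production code: every machine multicasts its
# own sound-card capture and plays back the sum of what the other machines
# send. Assumptions: the third-party "sounddevice" and "numpy" packages are
# installed; the multicast group, port and block size are arbitrary choices;
# there is no jitter buffering, clock-drift compensation or loss concealment.
import socket
import struct

import numpy as np
import sounddevice as sd

GROUP, PORT = "239.1.2.3", 5004   # hypothetical multicast group and port
RATE, BLOCK = 48000, 480          # 48 kHz mono, 16-bit, 10 ms blocks

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Best effort to avoid hearing our own stream (loopback semantics vary by OS).
send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0)

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv_sock.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
recv_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
recv_sock.setblocking(False)


def callback(indata, outdata, frames, time, status):
    # Send this machine's capture block to the group...
    send_sock.sendto(indata.tobytes(), (GROUP, PORT))

    # ...and mix every block that arrived from the others since the last call.
    mix = np.zeros(frames, dtype=np.float32)
    while True:
        try:
            pkt, _ = recv_sock.recvfrom(65536)
        except BlockingIOError:
            break
        samples = np.frombuffer(pkt, dtype=np.int16).astype(np.float32) / 32768.0
        n = min(len(samples), frames)
        mix[:n] += samples[:n]

    outdata[:, 0] = (np.clip(mix, -1.0, 1.0) * 32767).astype(np.int16)


with sd.Stream(samplerate=RATE, blocksize=BLOCK, channels=1,
               dtype="int16", callback=callback):
    print("Streaming; press Ctrl+C to stop.")
    sd.sleep(24 * 60 * 60 * 1000)
```

With 10 ms blocks a gigabit LAN should stay well within the 100 ms budget, though a real solution would also need jitter buffering and resampling to cope with clock drift between the sound cards.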
I've got a few old mobos and I was wondering whether it might be possible to create a pair of PCI header cards with interconnect wires and write some software to drive the interconnect cards, to allow one of the mobos to access the CPU and RAM on the other. I'm sure it would be an arduous undertaking involving writing a device driver for the header boards and then writing an application to make use of the interconnect; perhaps a simple demo demonstrating a thread running on each processor and use of both sets of RAM, or perhaps a mini virtual machine that maps 2x3 GB of RAM on 32-bit mobos into a single 6 GB address space. A microcontroller may be needed on each PCI header card to act as a translator.
Given that mobos almost always have multiple PCI slots, I wonder if these interconnected card pairs could be used to daisy-chain mobos in a sort of high-speed Beowulf cluster.
I would use Debian for each mobo and probably just an ATmega128 for each card, with a couple of ribbon cables for interconnecting.
PCI is basically just an I/O bus, so I don't see why this shouldn't be possible (but it would be pretty hard going).
Does anyone have any advice, or has this sort of thing been done before?
Update:
Thanks, Martin. What you say makes sense, and it would also seem that if it were possible, it would have already been done before.
Instead, would it be possible to indirectly control the slave CPU by booting it using a "pretend" bootable storage device (hard disk, USB stick, etc.)? As long as the slave mobo thinks it's being operated by an operating system on a real device, it should work.
This could potentially extend to any interface (SATA, IDE, USB, etc.): if you connected two PCs together with a SATA/IDE/USB cable (plug one end of an IDE ribbon into one mobo and the other end into another mobo), that would be all the hardware you need. The key is in creating a new driver for that interface on the master PC, so instead of the master PC treating that interface as having a storage device on it, the interface would be driven as a dummy bootable hard disk for the slave computer. This would still be a pretty difficult job for me because I've never written device drivers before, but at least I wouldn't need a soldering iron (which would be much further beyond me). I might be able to take an open-source IDE driver for Linux, study it, and then butcher it to create something that kind of acts in reverse (instead of getting data off it, an application puts data onto it for the slave machine to access like a hard disk). I could then take a basic Linux kernel and try booting the slave computer from an application on the master computer (via the butchered master-PC IDE/SATA/USB device driver). For safety, I would probably try to isolate my customised driver as much as possible by targeting an interface not being used for anything else on the master PC (the master PC might use all SATA hard disks with the IDE bus normally unused, so if I created a custom IDE driver it might cause fewer problems with the host system, because the host is SATA-driven).
Does anyone know if anything like this (faking a bootable hard disk from another PC) has ever been tried before? It would make a pretty cool Hackaday or YouTube project, but seriously, it could also add a new dimension to parallel computing if it proved promising.
The PCI bus can't take over the other CPU.
You could make an interconnect that can transfer data from a program on one machine to another. An Ethernet card is the most common implementation, but for high-performance clusters there are faster direct connections like InfiniBand.
Unfortunately, PCI is more difficult to build cards for than the old ISA bus; you need surface-mount controller chips and specific track layouts to match the impedance requirements of PCI.
Going faster than a few megabit/s involves understanding things like transmission lines and the characteristics of the connection cable.
I would use Debian for each mobo and probably just an ATmega128 for each card, with a couple of ribbon cables for interconnecting.
PCI is basically just an I/O bus, so I don't see why this shouldn't be possible (but it would be pretty hard going).
LOL. PCI is a 32-bit, 33 MHz bus at minimum, so it is simply out of reach for an ATmega.
But your idea of:
a pair of PCI header cards with interconnect wires and write some software to drive the interconnect cards to allow one of the mobos to access the CPU and RAM on the other [...]
This is cheaply possible with just a pair of PCI FireWire (IEEE 1394) cards (and a FireWire cable). There is even a Linux driver that allows remote debugging over FireWire.
Recently the buzz of virtualization has reached my workplace, where developers are trying out virtual machines on their computers. Earlier I had been hearing from several different developers about setting up virtual machines on their desktop computers for the sake of keeping their development environments clean.
There are plenty of Virtual Machine software products in the market:
Microsoft Virtual PC
Sun VirtualBox
VMware Workstation or Player
Parallels Inc.'s Parallels Desktop
I'm interested to know how you use virtualization effectively in your work. My question is: how do you use virtual machines for day-to-day development, and for what reason?
I just built a really beefy machine at home so that I could run multiple VMs at once. My case is probably extreme, but here is my logic for doing so.
Testing
When I test, particularly a desktop app, I typically create multiple VMs, one for each platform that my software should run on (Windows 2000/XP/Vista, etc.). If 32- and 64-bit flavors are available, I also build one of each. I also play with the VM hardware settings (e.g. lots of RAM, little RAM, 1 core, 2 cores, etc.). I found plenty of little bugs this way that definitely would have made it into the wild had I not used this approach.
This approach also makes it easy to play with different software scenarios (what happens if the user installing the program doesn't have .NET 3.5 SP1? What happens if he doesn't have XXX component? etc.).
Development
When I develop, I have one VM running my database servers (SQL 2000/2005/2008). This is for two reasons. First, it is more realistic: in a production environment your app is probably not running on the same box as the DB, so why not replicate that when you develop? Also, when I'm not developing (remember this is also my home machine), do I really need to have all of those database services running? Yes, I could turn them on and off manually, but it's so much easier to switch a VM on.
Clients
If I want to show a client some web work I've done, I can put just a single VM into the DMZ and he can log into the VM and play with the web project, while the rest of my network/computer is safe.
Compatibility
Vista 64 is now my main machine. Any older hardware/software I own will not play nicely with that OS. My solution is to have 32-bit Windows XP as a VM for all of those items.
Here's something that hasn't been mentioned yet.
Whenever a project enters maintenance mode (a.k.a. abandoned), I create a VM with all the tools, libraries, and source code necessary to build the project. That way, if I have to come back to it a year later, I won't get bitten in the ass by any upgraded tools or libraries on my workstation.
When I started at my current company, most support/dev/PM staff would run Virtual PC with 1-3 VMs on their desktop for testing.
After a few months I put together a proposal, and now we use a VMware ESXi server running a pool of virtual machines (all on 24/7) with different environments for our support staff to test customer problems and reproduce issues on. We have VMs of Windows 2000/XP/Vista with each of Office 2000/2002/2003/2007 installed (so that's 12 VMs), plus some more general test VMs and some Server 2003/2008 machines running Citrix, Terminal Services, etc. Basically, most of the time when we hit a new customer configuration that we need to debug, and it's likely other customers also have that configuration, I'll set up a VM for it. (e.g. we're only using three 64-bit VMs at the moment; mostly it's 32-bit.)
On top of that, the same server runs an XP VM that I use for building installers (InstallShield, WiX), debugging (VS 2005) and localization (Lingobit), as well as a second VM that our developers use for automated testing (TestComplete).
The development and installer VMs have been allocated higher priority and are both configured as dual-CPU VMs with 1 GB of memory. The remaining VMs have equal priority and 256 MB-1 GB of RAM.
Everything runs on a dual quad-core Xeon with 8 GB of RAM running ESXi and hardware RAID (4x1 TB RAID 10).
For little more than a US$2.5k investment we've improved productivity tenfold (imagine the downtime while a support lackey installs an older version of Office on their desktop to replicate a customer problem, or the time that I can't use my desktop because we're building installers). The next step will be to double the RAM to 16 GB as we add more memory-hungry Server 2008 and Vista VMs.
We still have the odd VM on our desktops (I've got localized versions of Windows, Ubuntu and Windows 7 running under VMware Workstation for example) but the commonly/heavily used configurations have been offloaded to a dedicated server that we can all remotely connect into. Much, much easier.
Virtualisation (with snapshots or non-persistent disks) is really useful for testing software installation in a known clean configuration (i.e. nothing left over from previous buggy installs of your software).
Having your development box in a single file (as a virtual machine) will make it much easier to back up and restore if an issue occurs.
Other than that, you can also carry your portable development box around to different machines, since you aren't restricted to that single particular machine you usually work on.
Not only that, but you can test on different operating systems at once, with a single OS installed in each virtual machine file you have.
Believe me, this will save you quite a bit of hassle when doing the jobs I mentioned above.
Another nice use case for VMs is to create a virtual network of machines. For example you can bring up machines running the different tiers of your application stack, each running in its own VM. Think of it as a poor man's datacentre.
These VMs can also appear available on your physical network, so you can use RDP or similar to get a remote terminal session with them.
You can have a beefy machine (lots of memory) running these VMs, while you access them remotely from another machine such as a laptop, or whichever machine you have with the best screen.
I use a VM under Windows to run Linux. Even though there's already a version of Emacs for Windows, using it in Linux just feels more gratifying for some reason.
Maintaining shelved computers
I have a situation where schools in my region are closed down but their finance systems have to be maintained for up to 2 years to ensure all outstanding bills are paid. This used to be handled by keeping the hardware from the mothballed schools running, which had some problems:
This wasted scarce hardware resources and took up a lot of physical space.
Finance officers had to be physically present at the hardware to work on each system.
Today I host each mothballed school in its own virtual box inside a single physical host. Each individual system is accessed by RDP on the IP address of the host but with its own port number, and the original security of each school is maintained.
Finance officers can now work on the mothballed schools without having to travel to where they are physically located, there is more physical space in the server room and backup of all the mothballed schools at once is a simple automated process.
With each mothballed school in its own VBox there is no way for cross-contamination of data between systems. Many thousands of dollars' worth of hardware is also freed up for redeployment.
Virtualisation appears to be the perfect solution to this problem.
I used the virtualization approach with VMware Server when the task in front of me was to test a clustered environment of WebSphere Application Server. After setting up VMware Server I created a new virtual machine and did all the software installations I would need (WebSphere App Server, Oracle, WebSphere Commerce, etc.), after which I shut down the VM and copied the virtual hard disk image to two different files, one as a clone VM and another as a backup.
I then created a new VM and assigned it one of the copied disk images, so I had two systems up and running, which allowed me to test the clustered-environment scenario. I took a snapshot of the VM through VMware, and if I goofed up any activities I would revert to the snapshot, going back to the previous state and increasing my productivity instead of having to work out what to reverse. The backup disk image can also be used if I need to revert to a very old state, instead of having to start from scratch.
The snapshot functionality, which exists in both VMware and Microsoft's Virtual PC/Server, is reason enough to consider virtualization for scenarios where you think you might make breaking changes that may not be easy to revert.
From what I know, there is nothing quite like Parallels on the Mac, though this is for work rather than for testing.
The integration (with "Coherence", your VM is not running "in a window" on your host system; all programs in the guest system have their own proper window in the host system) is splendid and lets you fill all (ALL!) the gaps:
My coworker has it configured so that Outlook in Windows (there is nothing like Outlook for Mac OS X) pops up when he clicks a "mailto:" link on a web page browsed with Firefox on the Mac!
In the other direction, if he gets sent a PDF, he double-clicks the attachment in Outlook (in Windows), which opens the PDF file in the Mac's built-in PDF viewer.
VirtualBox also offers this window-separation possibility (at least when Windows is running in the VM on Linux), which is really useful for work.
For testing etc., of course, there is nothing like a cleanly separated environment.
We have a physical server dedicated to hosting virtual machines in our development environment. The virtual machines are brought up and torn down on a regular basis and are used for testing software on known Standard Operating Environments.
It is also really helpful when we want an application to run on a domain that is different from the development environment.
Also, the organisation I am working for is in the planning stage of creating a large virtual testing ground. This will be a large grid of machines sitting on its own network, and all of the organisation's internal staff, contractors and third-party vendors will be able to stage their software for testing purposes prior to implementing it in the production environment. The virtual machines will reflect the physical machines in the production environment.
It sounds great, but everyone's a bit skeptical: this is a government organisation... Bureaucracy and red tape will probably turn this into a big waste of time and money.
If we are using virtual machines (VPC 2007, Virtual Server 2005, VMware applications, etc.):
1. We can run multiple operating systems (Windows 98/2000/XP/Vista, Windows Server 2003/2008, Windows 7, Linux, Solaris) on a single server.
2. We can reduce hardware costs and data center space.
3. We can reduce power and AC cooling costs.
4. We can reduce admin resources.
5. We can reduce application costs.
6. We can run ADS/DNS/DHCP/Exchange/SQL/SharePoint Server/File Server, etc.
Is anyone using Virtual PC to maintain multiple large .NET 1.1 and 2.0 websites? Are there any lessons learned? I used Virtual PC recently with a small WinForms app and it worked great, but then everything works great with WinForms. ASP.NET development hogs way more resources, requires IIS to be running, requires a ridiculously long wait after recompilations, etc., so I'm a little concerned. And I'll also be using Oracle, if that makes any difference.
Also, is there any real reason to use VMware instead of Virtual PC?
I've used Virtual PCs for a few years for development of some fairly hefty web apps without much problem. Lots of RAM is important. I keep my VPCs on an external USB drive and they perform great from there. This gives me the flexibility to take the drive with me if I need to do work somewhere else: just install VPC on a host, plug in the USB drive, and start coding.
For servers, we use VMware and have had little to no trouble with it.
Recently I went back to working on my local machine, as you lose the benefit of dual monitors with VPCs and I don't need to be as mobile as I used to be.
Virtual PC 2007 is very fast, especially on a CPU that has hardware support for VMs. 3 GB of RAM is a must for anything that isn't small. XP makes a good guest OS; Vista works well as a host.
Thanks for all the answers. So RAM is the key.
As far as dual-monitor capability goes, I found that I could use dual monitors as long as one of those monitors was my actual machine. And that was what I wanted anyway.
Mike
As long as you have the resources (separate hard disk for the virtual machine, sufficient RAM), I don't see why you would have any problems.
If you are going to be using VPCs as a server...perhaps Hyper-V (http://en.wikipedia.org/wiki/Windows_Server_Virtualization) is something to look at.
It's pretty powerful in how it lets you assign RAM and CPU cores to a virtualized machine.