I have this project in Laravel 6.0 stored locally on my laptop. I am just wondering if I can run this same project on another local PC and use it as a server for my local network only. I know it would be better to have a dedicated server for this project, but this is not a big-time system; I just want to run it on my own property. Is it okay if the said server has these specs?
Processor- Intel® Core™ i7-4770
Motherboard- Gigabyte GA-Z97X Gaming 7 ATX
Cooler- Stock cooler
RAM- 1x 8GB DDR3 Team Elite
GPU- Asus RX 570 4GB GDDR5
HDD- WD Green 1TB (100% healthy)
PSU- Seasonic M12II 620W, fully modular
I don't know if I should ask this here or on another site. Thanks if you can help enlighten me.
First of all, beyond the hardware of the machine where you want to host the application, a more thorough assessment would be needed: the type of application you want to host, the data traffic you will have, the database manager you are going to use, and whether it will be hosted on the same machine or remotely.
As you said, I understand that it is for domestic use, and I imagine you will use MySQL hosted on the same machine (correct me if I am wrong), so the hardware you described should not cause you problems.
Hope this helps.
Your hardware should suffice. However, if you can manage it, try setting it up on a command-line-only Linux OS. Most importantly, if possible, replace your HDD with an SSD; even a small one will noticeably improve the performance of your application.
I want Windows 10 x64 Professional hosted on AWS. Is that possible? And if so, how might one go about it?
To expound:
I just want a real Windows 10 environment hosted remotely with a static IP address, so I can use it like a personal computer plus a server for some dev stuff.
This is likely what you are looking for:
https://aws.amazon.com/workspaces/
Amazon WorkSpaces is a managed, secure cloud desktop service. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe. You can pay either monthly or hourly, just for the WorkSpaces you launch, which helps you save money when compared to traditional desktops and on-premises VDI solutions. Amazon WorkSpaces helps you eliminate the complexity in managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.
And this is how you can give it a static IP:
https://aws.amazon.com/premiumsupport/knowledge-center/associate-elastic-ip-workspace/
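If you'd rather script that association than click through the console, here's a minimal boto3 sketch (the WorkSpace ID and region are placeholders; the flow follows the knowledge-center article above: look up the WorkSpace's private IP, find its network interface, then attach an Elastic IP to it):

    import boto3

    region = "us-east-1"  # placeholder region
    workspaces = boto3.client("workspaces", region_name=region)
    ec2 = boto3.client("ec2", region_name=region)

    # Look up the WorkSpace to get its private IP address.
    ws = workspaces.describe_workspaces(WorkspaceIds=["ws-xxxxxxxxx"])["Workspaces"][0]

    # Find the elastic network interface that carries that private IP.
    eni = ec2.describe_network_interfaces(
        Filters=[{"Name": "addresses.private-ip-address", "Values": [ws["IpAddress"]]}]
    )["NetworkInterfaces"][0]

    # Allocate an Elastic IP and associate it with the WorkSpace's ENI.
    alloc = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        AllocationId=alloc["AllocationId"],
        NetworkInterfaceId=eni["NetworkInterfaceId"],
    )
    print("WorkSpace reachable at", alloc["PublicIp"])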
Edit:
Amazon WorkSpaces now offers bundles that come with a Windows 10 desktop experience, powered by Windows Server 2016. Amazon WorkSpaces Windows 10 bundles provide you an easy way to move users to a modern operating system, while also simplifying licensing. Amazon WorkSpaces continues to offer bundles that come with a Windows 7 desktop experience, provided by Windows Server 2008 R2. You can also run Windows 7 and Windows 10 Enterprise operating systems with Amazon WorkSpaces if your organization meets the licensing requirements set by Microsoft.
@BrownChiLD
You can create your own AMI on AWS. Steps are below:
1. Create the VM on your system using VMware Workstation or Hyper-V
2. Export the VM
3. Upload it to an S3 bucket
Once your VM is uploaded to S3, follow the steps at the link below:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html#import-vm-image
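If you want to automate the same flow, a rough boto3 sketch of the image-import step could look like this (bucket, key, and region are placeholders; note that importing a client OS like Win10 has to be BYOL):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Start importing the exported VM disk that was uploaded to S3.
    task = ec2.import_image(
        Description="Win10 BYOL import",
        LicenseType="BYOL",  # client-Windows images must bring their own license
        DiskContainers=[{
            "Description": "exported VM disk",
            "Format": "VMDK",
            "UserBucket": {"S3Bucket": "my-import-bucket", "S3Key": "win10.vmdk"},
        }],
    )

    # The import runs asynchronously; poll the task until an AMI ID appears.
    status = ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]])
    print(status["ImportImageTasks"][0].get("Status"))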
At present, the only way to achieve what you want is by spinning up your own Win10 instance, assigning the static internal IP while creating it, or by adding an Elastic IP if it's in an Internet-Gateway-enabled subnet.
It's not that convenient: you'll need to set up the environment yourself, including Security Groups, ACLs, etc., to get a bit of security, and connecting using RDP will be a bit of a pain (besides, doing so over the internet isn't exactly advisable; you might start thinking about Chrome Remote Desktop or even TeamViewer). It will also be very pricey to run. First things first: apparently there's no Win10 available as an AMI, so you'll need to deploy it yourself, and once it's running you'll need to license it. An instance type suitable for this could cost around $80 per month, unreserved.
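For what it's worth, here's roughly what that looks like scripted with boto3 (every ID and address below is a placeholder; the AMI is whatever your own Win10 import produced, and the security group should only allow RDP, TCP 3389, from your own address):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Launch the instance with a fixed private IP inside your subnet.
    run = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # your imported Win10 AMI
        InstanceType="t3.large",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",
        PrivateIpAddress="10.0.0.50",         # the static internal IP
        SecurityGroupIds=["sg-0123456789abcdef0"],
    )
    instance_id = run["Instances"][0]["InstanceId"]

    # In an Internet-Gateway-enabled subnet, add an Elastic IP for a
    # stable public address.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    alloc = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(AllocationId=alloc["AllocationId"], InstanceId=instance_id)
    print(instance_id, "reachable at", alloc["PublicIp"])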
Using AWS WorkSpaces isn't really an option: besides the fact that it is not "Windows 10" but Windows Server 2016 (I needed WSL, which was introduced with Server 2019, so no joy), the only way to have a proper Win10 is BYOL, but... (quoting the FAQ):
You need to commit to running 200 Amazon WorkSpaces in a region per month on hardware that is dedicated to you. If you want to bring your own Windows desktop licenses for graphics use cases, you need to commit to at least 4 monthly or 20 hourly GPU-enabled WorkSpaces.
:-/
Amazon WorkSpaces is a virtual desktop that runs on AWS, but you connect through Amazon client software that acts a lot like VirtualBox, except the OS you're using is not on your local machine. So it's more like a thin-client environment over the internet. I believe the OS in WorkSpaces is managed by AWS as far as patching and updates, through software called A.C.M.E. (Amazon Client Management Engine).
https://youtu.be/jsqI7KU3S8I
Amazon EC2 also provides Windows instances that you would connect to through an RDP connection. You'll have to manage the patching and updates yourself, though.
Here's a link for your reading pleasure:
https://aws.amazon.com/windows/resources/licensing/
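As a side note, if you go the EC2 route, the initial Administrator password for that RDP session is generated by EC2 and encrypted with the key pair you launched with. A hedged sketch of retrieving it with boto3 and the cryptography package (instance ID, region, and key path are placeholders):

    import base64
    import boto3
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
    instance_id = "i-0123456789abcdef0"                 # placeholder instance

    # Password data takes a few minutes to appear after launch.
    ec2.get_waiter("password_data_available").wait(InstanceId=instance_id)
    encrypted = base64.b64decode(
        ec2.get_password_data(InstanceId=instance_id)["PasswordData"]
    )

    # Decrypt with the private key of the launch key pair (EC2 uses PKCS#1 v1.5).
    with open("my-keypair.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    print("RDP as Administrator with:", key.decrypt(encrypted, padding.PKCS1v15()).decode())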
I have a cluster of machines running Windows Server 2012 R2.
I would like to manage them with Mesos.
To the best of my knowledge, Microsoft is actively contributing to Mesos (DC/OS) and will support containers natively on Windows Server 2016. Furthermore, it looks like there is another container flavour that uses Hyper-V.
I can run my Mesos masters on Linux hosts. However, I need my slaves on Windows Server 2012 R2 hosts. It is not clear to me which technologies are already available (and production-ready) for my Windows Server version.
What are my options for using Mesos to manage the resources of my Windows Server machines?
Is the Mesos agent for Windows (Server 2012 R2) production-ready?
Can I use containers (Hyper-V or Docker)? If not, does resource isolation work on Windows (on Linux you can use cgroups)?
Can I run any framework I like, or are some not compatible with Windows?
Mesos 1.0.0 was recently released, which allows you to run the slave and launcher on Windows. Not the master, unfortunately: that's still Linux-only, but it doesn't really ever need to be Windows. The slave was the important bit for bringing Windows machines into the Mesos domain.
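As a rough illustration, launching the agent on a Windows box and pointing it at a Linux master might look like the following (the paths and addresses are hypothetical; --master, --work_dir and --ip are standard mesos-agent flags):

    import subprocess

    # Start the Mesos agent on the Windows host and register it with the
    # Linux-hosted master. Paths and IPs below are examples only.
    subprocess.run([
        r"C:\mesos\bin\mesos-agent.exe",
        "--master=10.0.0.10:5050",    # the Mesos master running on Linux
        r"--work_dir=C:\mesos\work",  # agent scratch/checkpoint directory
        "--ip=10.0.0.20",             # address this agent advertises
    ], check=True)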
I've just been investigating using the Mesos slave on Windows. I'm pleased to say that it appears to be working OK (this opinion is subject to change, as I'm still testing it). Whether it is production-ready is something any business would have to decide for themselves.
Mesos has always had its own isolation technology. Interestingly, the containerizer implementation has been redone and now accepts a number of container image formats, so you can use your Docker images as well as a few others; this is going to suit you. There was a good presentation on this at MesosCon: https://www.youtube.com/watch?v=rHUngcGgzVM
Docker has been stealing the show to some extent, but if you use the Mesos agent, Windows Server 2016 and its container technology (Docker) aren't needed, and therefore it should run on Windows Server 2012. I've not got around to trying this yet, but it's definitely a test worth doing, as it opens up deployment options. Anyone?
One thing to remember about containers: they are not VMs. The guest image must be a derivative of the host's OS; you can't run a Linux image on a Windows machine. This is causing me a headache: I can't use Nano Server at the moment, so my image sizes are 4GB+ and the initial deploy time is hours.
I need to install Linux from an existing VMware VMDK on EC2. The first time I can do this manually; later I will need to do it in an automated way.
Could you please point me to the relevant documentation? Any tips and experiences are also welcome.
Why do I need this?
At my company, developers and QA run our PHP apps on virtual machines hosted on their local machines. We want to move these virtual machines to the cloud, so each developer can easily set up a sandbox through a simple web interface.
Amazon does not officially support importing Linux. However, an article from 2008 claims it can be done. If you try this, note this URL as well.
Finally, an AWS employee posted this too:
You can use ec2-import-volume to turn a local disk in a RAW, VMDK or VHD file format into an EBS volume in EC2. This turns a full disk, with MBR, into an EBS disk. If the guest is PV, with the Xen PV drivers installed, you could take a snapshot and create an AMI from that snapshot, inserting the correct AKI.
Follow the instructions for creating your own AMI. Also check out the following articles on EBS volumes: article1, article2. Here are some steps on how to create an EBS-backed AMI instance.
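For reference, the modern boto3 equivalent of the ec2-import-volume flow is the ImportSnapshot API; a rough sketch of turning an uploaded disk into an EBS-backed AMI (bucket, key, names, and device mapping are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Import the uploaded VMDK as an EBS snapshot (runs asynchronously).
    task = ec2.import_snapshot(
        DiskContainer={
            "Format": "VMDK",
            "UserBucket": {"S3Bucket": "my-import-bucket", "S3Key": "linux.vmdk"},
        }
    )
    detail = ec2.describe_import_snapshot_tasks(
        ImportTaskIds=[task["ImportTaskId"]]
    )["ImportSnapshotTasks"][0]["SnapshotTaskDetail"]
    snapshot_id = detail["SnapshotId"]  # present once Status is "completed"

    # Register an EBS-backed AMI on top of that snapshot.
    ami = ec2.register_image(
        Name="imported-linux",
        Architecture="x86_64",
        RootDeviceName="/dev/sda1",
        VirtualizationType="hvm",
        BlockDeviceMappings=[{"DeviceName": "/dev/sda1", "Ebs": {"SnapshotId": snapshot_id}}],
    )
    print("AMI:", ami["ImageId"])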
You will have to create your own images manually. The EC2 API tools do not support Linux/ESXi 5 images. I just found this out after spending two hours on a VMware-Linux-to-Amazon port.
Recently the buzz of virtualization has reached my workplace, where developers are trying out virtual machines on their computers. Earlier, I had been hearing from several different developers about setting up virtual machines on their desktop computers for the sake of keeping their development environments clean.
There are plenty of virtual machine software products on the market:
Microsoft Virtual PC
Sun VirtualBox
VMware Workstation or Player
Parallels Inc.'s Parallels Desktop
I'm interested to know how you use virtualization effectively in your work. My question is: how do you use virtual machines for day-to-day development, and for what reason?
I just built a real beefy machine at home so that I could run multiple VMs at once. My case is probably extreme, but here is my logic for doing so.
Testing
When I test, particularly a desktop app, I typically create multiple VMs, one for each platform my software should run on (Windows 2000/XP/Vista, etc.). If 32-bit and 64-bit flavors are available, I also build one of each. I also play with the VM hardware settings (e.g. lots of RAM, little RAM, 1 core, 2 cores, etc.). I found plenty of little bugs this way that definitely would have made it into the wild had I not used this approach.
This approach also makes it easy to play with different software scenarios (what happens if the user installing the program doesn't have .NET 3.5 SP1? What happens if he doesn't have the XXX component? etc.).
Development
When I develop, I have one VM running my database servers (SQL 2000/2005/2008). This is for two reasons. First, it is more realistic: in a production environment your app is probably not running on the same box as the DB, so why not replicate that when you develop? Also, when I'm not developing (remember, this is also my home machine), do I really need to have all of those database services running? Yes, I could turn them on and off manually, but it's so much easier to switch a VM on.
Clients
If I want to show a client some web work I've done, I can put just a single VM into the DMZ and he can log into the VM and play with the web project, while the rest of my network/computer is safe.
Compatibility
64-bit Vista is now my main machine. Any older hardware/software I own will not play nicely with that OS. My solution is to have 32-bit Windows XP as a VM for all of those items.
Here's something that hasn't been mentioned yet.
Whenever a project enters maintenance mode (aka abandoned), I create a VM with all the tools, libraries, and source code necessary to build the project. That way, if I have to come back to it a year later, I won't get bitten in the ass by any upgraded tools or libraries on my workstation.
When I started at my current company, most support/dev/PM staff would run Virtual PC with 1-3 VMs on their desktop for testing.
After a few months I put together a proposal, and now we use a VMware ESXi server running a pool of virtual machines (all on 24/7) with different environments for our support staff to test customer problems and reproduce issues on. We have VMs of Windows 2000/XP/Vista with each of Office 2000/2002/2003/2007 installed (so that's 12 VMs), plus some more general test VMs, and some Server 2003/2008 machines running Citrix, Terminal Services, etc. Basically, whenever we hit a new customer configuration that we need to debug, and it's likely other customers also have that configuration, I'll set up a VM for it. (E.g. we're only using three 64-bit VMs at the moment; mostly it's 32-bit.)
On top of that, the same server runs an XP VM that I use for building installers (InstallShield, WiX), debugging (VS 2005), and localization (Lingobit), as well as a second VM that our developers use for automated testing (TestComplete).
The development and installer VMs have been allocated higher priority and are both configured as dual-CPU VMs with 1GB of memory. The remaining VMs have equal priority and 256MB-1GB of RAM.
Everything runs on a dual quad-core Xeon with 8GB of RAM, running ESXi with hardware RAID (4x 1TB in RAID 10).
For little more than a US$2.5k investment, we've improved productivity tenfold (imagine the downtime while a support lackey installs an older version of Office on their desktop to replicate a customer problem, or the time I can't use my desktop because we're building installers). The next step will be to double the RAM to 16GB as we add more memory-hungry Server 2008 and Vista VMs.
We still have the odd VM on our desktops (I've got localized versions of Windows, Ubuntu and Windows 7 running under VMware Workstation for example) but the commonly/heavily used configurations have been offloaded to a dedicated server that we can all remotely connect into. Much, much easier.
Virtualisation (with snapshots or non-persistent disks) is really useful for testing software installation in a known clean configuration (i.e. nothing left over from previous buggy installs of your software).
Having your development box in a single file (as a virtual machine) makes it much easier to back up and restore if an issue occurs.
Other than that, you can also carry your portable development box around to different machines, since you aren't restricted to the single particular machine you usually work on.
Not only that, but you can test on different operating systems at once, with a single OS installed in each virtual machine file you have.
Believe me, this will save you quite a bit of hassle when doing the jobs I mentioned above.
Another nice use case for VMs is to create a virtual network of machines. For example you can bring up machines running the different tiers of your application stack, each running in its own VM. Think of it as a poor man's datacentre.
These VMs can also appear on your physical network, so you can use RDP or similar to get a remote terminal session with them.
You can have a beefy machine (lots of memory) running these VMs, while you access them remotely from another machine such as a laptop, or whichever machine you have with the best screen.
I use a VM under Windows to run Linux. Even though there's already a version of emacs for windows, using it in Linux just feels more gratifying for some reason.
Maintaining shelved computers
I have a situation where schools in my region are closed down, but their finance systems have to be maintained for up to 2 years to ensure all outstanding bills are paid. This used to be handled by maintaining the hardware from the mothballed schools, which had some problems:
This wasted scarce hardware resources and took up a lot of physical space.
Finance officers had to be physically present at the hardware to work on each system.
Today I host each mothballed school in its own virtual box inside a single physical host. Each individual system is accessed over RDP at the IP address of the host, but with its own port number, and the original security of each school is maintained.
Finance officers can now work on the mothballed schools without having to travel to where they are physically located, there is more physical space in the server room and backup of all the mothballed schools at once is a simple automated process.
With each mothballed school in its own VM, there is no way for cross-contamination of data between systems. Many thousands of dollars' worth of hardware is also freed up for redeployment.
Virtualisation appears to be the perfect solution to this problem.
I used the virtualization approach with VMware Server when the task in front of me was to test a clustered WebSphere Application Server environment. After setting up VMware Server, I created a new virtual machine and installed all the software I would need (WebSphere App Server, Oracle, WebSphere Commerce, etc.), after which I shut down the VM and copied the virtual hard disk image to two different files, one as a clone VM and another as a backup.
I then created a new VM and assigned it one of the copied disk images, so I had two systems up and running, which allowed me to test the clustered-environment scenario. I took a snapshot of the VM through VMware, and if I goofed up any activity I would revert to the snapshot, returning to the previous state and increasing my productivity instead of having to work out what to reverse. The backup disk image can also be used if I need to revert to a very old state, instead of having to start from scratch.
The snapshot functionality that exists in both VMware and Microsoft's Virtual PC/Server is reason enough to consider virtualization for scenarios where you think you might make breaking changes that would not be easy to revert.
From what I know, there is nothing quite like Parallels on the Mac, though for work rather than for testing.
The integration is splendid (with "Coherence", your VM is not running "in a window" on your host system; every program in the guest system gets its own proper window in the host system) and lets you fill all (ALL!) the gaps:
My coworker has it configured so that Outlook in Windows (there is nothing like Outlook for Mac OS X) pops up when he clicks a "mailto:" link on a web page browsed with Firefox on the Mac!
In the other direction, if he gets sent a PDF, he double-clicks the attachment in Outlook (in Windows), which opens the PDF file in the Mac's built-in PDF viewer.
VirtualBox also offers this window-separation capability (at least when Windows is running in the VM on Linux), which is really useful for work.
For testing etc. of course, there is nothing like a cleanly separated environment.
We have a physical server dedicated to hosting virtual machines in our development environment. The virtual machines are brought up and torn down on a regular basis and are used for testing software on known Standard Operating Environments.
It is also really helpful when we want an application to run on a domain that is different from the development environment.
Also, the organisation I am working for is in the planning stage of creating a large virtual testing ground. This will be a large grid of machines, sitting on its own network, and all of the organisation's internal staff, contractors and third-party vendors will be able to stage their software for testing purposes prior to implementing it in the production environment. The virtual machines will reflect the physical machines in the production environment.
It sounds great, but everyone's a bit skeptical: This is a Government organisation... Bureaucracy and red-tape will probably turn this into a big waste of time and money.
If we are using virtual machines (Virtual PC 2007, Virtual Server 2005, VMware applications, etc.):
1. We can run multiple operating systems (Windows 98/2000/XP/Vista, Windows Server 2003/2008, Windows 7, Linux, Solaris) on a single server.
2. We can reduce hardware costs and data center space.
3. We can reduce power and AC cooling costs.
4. We can reduce admin resources.
5. We can reduce application costs.
6. We can run ADS/DNS/DHCP/Exchange/SQL/SharePoint Server/file servers, etc.