
Virtual Machines Pt. 2 (Proxmox install w/ Kali Linux)

NetworkChuck · 5 min read

Based on NetworkChuck's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Proxmox provides a type 1 hypervisor setup by installing Proxmox Virtual Environment directly on hardware, replacing the existing OS and avoiding type 2 resource-sharing limits.

Briefing

A type 1 hypervisor setup with Proxmox turns an old PC into a dedicated “computer inside a computer” platform—built for running multiple virtual machines and Linux containers without the resource limits and OS-sharing overhead of type 2 tools like VirtualBox. The practical payoff is bigger, more serious lab capacity: more CPU and RAM available per guest, larger deployments, and a closer match to how enterprise virtualization is typically run.

The core shift is architectural. With type 2 hypervisors, the host operating system stays in the middle, sharing its resources with virtual machines. With type 1, Proxmox runs directly on the hardware, so the existing OS (for example, Windows 10) is wiped and replaced by Proxmox. That “no layer between hardware and hypervisor” design makes the host OS lightweight and keeps the system focused on delivering virtualized compute. The pitch is also career-oriented: learning a type 1 hypervisor is framed as a resume-building skill, with VMware ESXi cited as the common corporate alternative—though ESXi often struggles with consumer hardware.

After the motivation, the transcript walks through a full install workflow. Requirements are straightforward: a spare machine (old laptop or desktop), at least an 8GB USB flash drive, and—crucially—an Ethernet connection (Proxmox won’t use Wi‑Fi). The process starts by downloading the Proxmox Virtual Environment ISO from proxmox.com, then writing the ISO to the USB drive using Rufus on Windows (switching to “dd” mode for reliability). Booting into the installer may require BIOS/UEFI changes so the system starts from the USB drive.
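On Linux or macOS, the same ISO-to-USB write can be done with dd instead of Rufus. A sketch only: the ISO filename and the /dev/sdX device path below are placeholders, and writing to the wrong device destroys its data, so verify the target with lsblk first.

```shell
# List block devices and identify the USB drive (e.g., /dev/sdb)
lsblk
# Write the Proxmox VE ISO to the drive (filename and device are placeholders)
sudo dd if=proxmox-ve.iso of=/dev/sdX bs=4M status=progress conv=fsync
```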

During installation, the transcript emphasizes network setup: Proxmox needs DHCP to assign an IP, and if the Ethernet cable isn’t connected early, the installer may require setting a static IP manually. Once installed, management happens through a web UI at https://<IP>:8006, using the default username root and the password set during installation. A browser certificate warning appears because Proxmox ships with a self-signed SSL certificate, but the system is still fully functional.
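The same self-signed-certificate caveat applies to Proxmox’s REST API, which sits behind the same port as the web UI. A minimal sketch of logging in from the command line (the IP address and password are placeholders; -k tells curl to accept the self-signed certificate):

```shell
# Request an authentication ticket from the Proxmox API
# (-k skips certificate verification, matching the browser warning)
curl -sk -d "username=root@pam" -d "password=yourpassword" \
  https://192.168.1.50:8006/api2/json/access/ticket
```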

A key “make it usable” step follows: expanding storage. By default, Proxmox may not allocate the full disk to virtual machine storage, especially on single-drive installs using local-lvm. The workflow removes the local-lvm volume and then runs three LVM/resize commands in the Proxmox shell to reclaim and expand the filesystem, effectively unlocking the majority of the drive for VM and container use.
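The storage-reclaim step above boils down to three commands in the Proxmox shell. This is destructive: the local-lvm volume and anything stored on it are deleted, so it is meant for fresh installs.

```shell
# Delete the thin-pool logical volume that backs local-lvm
lvremove /dev/pve/data
# Grow the root logical volume into all of the freed space
lvresize -l +100%FREE /dev/pve/root
# Expand the ext4 filesystem to fill the resized volume
resize2fs /dev/mapper/pve-root
```

After this, the local storage in the web UI reflects most of the physical drive.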

With storage ready, the lab becomes real: ISO images are uploaded (examples include Kali Linux, Ubuntu, and Windows Server 2019). Virtual machines are created with chosen VM IDs, storage size (e.g., 50GB), CPU cores, RAM (e.g., 4096MB), and bridged networking. The transcript also adds containers using LXC: it downloads container templates (e.g., Ubuntu 20) and creates a fast Ubuntu container that boots in seconds.
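The transcript does all of this through the web UI, but Proxmox exposes the same operations via its qm (VM) and pct (container) command-line tools. A sketch assuming the ISO and template have already been uploaded; the VM IDs, names, and file names below are illustrative:

```shell
# Create VM 100: 4096MB RAM, 2 cores, 50GB disk, bridged networking
qm create 100 --name kali-lab --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local:50 \
  --cdrom local:iso/kali-linux-installer-amd64.iso

# Create LXC container 200 from a downloaded Ubuntu template, then start it
pct create 200 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz \
  --hostname fast-ubuntu --memory 1024 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200
```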

Finally, it addresses practical “laptop server” concerns. Commands and config edits adjust systemd login settings so closing the laptop lid doesn’t suspend the system, and GRUB settings add a screen timeout so the display sleeps after a few minutes. The result is a persistent, headless-style homelab that can run VMs and containers even when the laptop is tucked away.

Cornell Notes

Proxmox turns spare hardware into a type 1 hypervisor lab by installing Proxmox Virtual Environment directly on the machine, replacing the existing OS. The setup is managed through a web UI at https://<IP>:8006 and supports both full virtual machines (Kali Linux, Ubuntu, Windows Server 2019) and Linux containers via LXC templates (e.g., Ubuntu 20). A major usability step is reclaiming disk space by removing local-lvm and resizing the remaining storage so virtual machines can use most of the drive. The workflow also includes practical laptop-lab tweaks so closing the lid doesn’t suspend the system and the screen can sleep after a set time. This matters because it creates a more enterprise-like, scalable virtualization environment than type 2 hypervisors on a typical desktop OS.

What makes Proxmox a “type 1” hypervisor, and why does that matter for running multiple labs?

Type 1 means Proxmox runs directly on the hardware, with no host operating system sitting in between. That removes the resource-sharing bottleneck typical of type 2 setups (like VirtualBox), where the host OS consumes CPU/RAM/storage and then has to allocate leftovers to guests. With Proxmox, the host layer is lightweight and the system is focused on delivering VM/container resources, enabling larger and more numerous virtual machines and bigger lab environments.

Why does the install workflow insist on Ethernet instead of Wi‑Fi?

Proxmox requires a wired network connection for installation and initial IP assignment. The transcript notes that Proxmox will not use Wi‑Fi, so an Ethernet cable is needed. If Ethernet isn’t connected early, the installer may not pull DHCP information and the user may need to set a static IP during setup.

How does the transcript handle the common “not all disk space is available” problem after Proxmox installation?

It removes the local-lvm storage volume and then runs three shell commands to delete the logical volume and resize the main filesystem: lvremove /dev/pve/data, lvresize -l +100%FREE /dev/pve/root, and resize2fs /dev/mapper/pve-root. Afterward, the local storage shows a much larger usable capacity (e.g., expanding from a smaller local-lvm allocation to hundreds of gigabytes available for VMs/containers).

What’s the difference between creating a VM and creating an LXC container in Proxmox?

VMs are created from ISO images and boot as full guest operating systems (the transcript uses Kali Linux, Ubuntu, and Windows Server 2019 examples). LXC containers use prebuilt templates (CT templates) and start much faster because they share the host’s kernel rather than booting a full OS. The transcript demonstrates downloading an Ubuntu template and creating a “fast ubuntu” container that becomes available in seconds.

How does the transcript keep a laptop-based Proxmox lab running when the lid is closed?

It edits the systemd login configuration using nano on /etc/systemd/logind.conf, changing the lid-switch behavior from suspend to ignore (including the docked/undocked variants). Then it restarts the login service with systemctl restart systemd-logind. A second tweak edits /etc/default/grub to add consoleblank=300 to the kernel command line so the screen blanks after about five minutes.
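The two tweaks can be summarized as follows. A sketch under the transcript’s assumptions (the logind settings and the 300-second timeout come from the video; GRUB_CMDLINE_LINUX_DEFAULT may already contain other options that should be preserved):

```shell
# /etc/systemd/logind.conf — ignore the lid so the lab keeps running:
#   HandleLidSwitch=ignore
#   HandleLidSwitchDocked=ignore
nano /etc/systemd/logind.conf
systemctl restart systemd-logind

# /etc/default/grub — blank the console after 300 seconds:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet consoleblank=300"
nano /etc/default/grub
update-grub   # regenerate the GRUB config, then reboot to apply
```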

Review Questions

  1. What architectural limitation of type 2 hypervisors does type 1 Proxmox avoid, and how does that affect VM capacity?
  2. Why is reclaiming storage (removing local-lvm and resizing) a prerequisite before deploying many VMs or containers?
  3. In what situations would you prefer an LXC container over a VM, based on the transcript’s examples?

Key Points

  1. Proxmox provides a type 1 hypervisor setup by installing Proxmox Virtual Environment directly on hardware, replacing the existing OS and avoiding type 2 resource-sharing limits.
  2. Ethernet is required because Proxmox won’t use Wi‑Fi; missing Ethernet early can force manual static IP configuration during installation.
  3. Writing the Proxmox ISO to USB on Windows is done with Rufus, using “dd” mode for the ISO-to-flash process.
  4. After installation, reclaim disk space by removing local-lvm and resizing the main logical volume so virtual machines can use most of the physical drive.
  5. Proxmox supports both VMs (from uploaded ISO images like Kali Linux, Ubuntu, and Windows Server 2019) and LXC containers (from CT templates like Ubuntu 20).
  6. A laptop lab can stay online by configuring systemd-logind to ignore lid-switch events and by adjusting GRUB settings to control screen blanking.
  7. Management happens via a web UI at https://<IP>:8006 using root credentials set during installation, with certificate warnings handled by continuing to the interface.

Highlights

Type 1 Proxmox runs directly on the hardware, so the existing OS is wiped and the hypervisor focuses resources on guests—making bigger labs feasible than with type 2 tools.
The “storage unlock” step removes local-lvm and uses LVM resize commands so the majority of a single-disk install becomes available for VM storage.
Proxmox can spin up full VMs from ISOs and also launch LXC containers from templates in seconds, enabling a fast mix of lab workloads.
Laptop usability matters: systemd-logind lid settings prevent suspension, and the GRUB kernel parameter consoleblank=300 lets the display sleep without stopping the lab.

Topics

  • Type 1 Hypervisor
  • Proxmox Installation
  • LVM Storage Expansion
  • Virtual Machines
  • LXC Containers

Mentioned

  • VM
  • LXC
  • ISO
  • BIOS
  • UEFI
  • DHCP
  • IP
  • GRUB
  • API
  • SSL
  • VMware
  • ESXi