The FreeBSD Release Engineering Team is pleased to announce the availability of FreeBSD 11.4-RELEASE. This is the fifth and final release of the stable/11 branch.
Read the announcement for more information.
Support for 11.4 will last until at least the end of September 2021. That’s 10 days short of 5 years of support for the 11.x series, which isn’t bad at all.
I'll fire this one up on my laptops, but I don't expect much. FreeBSD has always had trouble with the WiFi hardware on my Dell laptop (not an issue with the chipset, just my particular laptop), making it unreliable. And the WiFi on my MacBook Pro isn't supported at all.
So, sadly, for the past few years my only experience with my favorite OS has been running it in a virtual machine.
It does a fine job at that, though. It can run as a Generation 2 virtual machine in Hyper-V – meaning native support for the virtual I/O devices rather than emulated hardware, along with other features – though I mostly use VMware.
You are doing it the wrong way!
You can run a virtual machine (with bhyve) inside your FreeBSD machine. This VM runs Linux, which has support for your wireless chipset, and acts as a router for the FreeBSD host on which it is running. Problem solved!
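For the curious, here is a rough sketch of what the FreeBSD host side could look like; the PCI address 2/0/0, the disk image, and the firmware path are placeholders (find your card's actual address with pciconf -lv):

    # /boot/loader.conf -- reserve the WiFi card (example address 2/0/0) for passthrough
    vmm_load="YES"
    pptdevs="2/0/0"

    # After a reboot the card is claimed by ppt instead of a normal driver.
    # Boot a Linux guest with the card handed to it (-S wires guest memory,
    # which passthrough needs); the virtio-net/tap0 slot is the back-channel
    # the host would later route its traffic over:
    bhyve -c 2 -m 2G -S -A -H \
      -s 0,hostbridge \
      -s 3,virtio-net,tap0 \
      -s 4,virtio-blk,/vm/linux.img \
      -s 6,passthru,2/0/0 \
      -s 31,lpc -l com1,stdio \
      -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
      linuxguest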
tingo,
I'm sure that probably works; I've used a Linux VM under VirtualBox to create a router to get around overly aggressive VPN software, and it actually worked.
It's a creative workaround; however, actually depending on a Linux VM to provide connectivity for the host just feels wrong to me, and I wouldn't be happy with that dependency.
I've got a few days off coming up, and I really might try it since my MacBook supports VT-d. Thing is, by the time I get a chance to, I probably won't be able to come back to this thread and post the results. I imagine the commenting period will be done by then…
Drumhellar,
Yeah, I'd be interested in hearing how it goes. I'm more familiar with the Linux side, but close to clueless with FreeBSD.
You could try submitting it in article format, but without knowing whether it will get published it could be a lot of wasted effort. I guess if the comments are locked you can either mention it in an off topic article or wait for a future article that is tangentially related, haha.
Not that you need another solution, but if you cannot get that configuration to work for any reason, you could run FreeBSD under Linux as well. You'd lose novelty points, but at least that should be easy.
Hahah. This would totally work, too.
I’m half tempted to do it just for the hell of it.
Would this really work? I don’t use FreeBSD and know nothing about bhyve.
In my mind, a virtual machine always has lesser or at most equal hardware access compared to the host OS. So if the host OS has no support, it couldn't pass the device through correctly to the virtual machine, and the virtual machine certainly couldn't pass it back with more hardware support. Hypervisors can do impressive things, of course, but this would blow my mind.
(of course, running a full-blown virtual machine just to get WiFi working is ridiculous, but we are not talking about efficiency here, just technical capability)
avgalen,
I don't have any experience with any of this on FreeBSD; however, CPUs with an IOMMU (most CPUs support Intel VT-d or AMD-Vi these days) are able to isolate physical PCI devices and pass them through to the guest OS.
This is how you would do this with KVM.
https://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM
I haven't seen this done with a network card. Obviously the host would normally act as a bridge or connect the guest to a VLAN using the host stack… but in the event the host doesn't support the card, it's still able to pass it through using the IOMMU. It'd be more common to pass through graphics cards to the guest so you can experience full 3D acceleration (i.e. a native-performance GPU in a Windows VM). In principle it should be possible to pass through any arbitrary PCI device, since the IOMMU enables the host to map the device into the VM's address space. (You may recall an IOMMU is necessary to mitigate numerous Thunderbolt vulnerabilities; it does so using the same technique – moving the device into an isolated address space.)
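On the Linux/KVM side, the commonly used route these days is vfio-pci; a rough sketch, assuming the WiFi card sits at 0000:02:00.0 and the host kernel was booted with intel_iommu=on (or amd_iommu=on):

    # detach the card from whatever host driver claimed it (skip if none did)
    echo 0000:02:00.0 > /sys/bus/pci/devices/0000:02:00.0/driver/unbind
    # hand it to vfio-pci instead
    echo vfio-pci > /sys/bus/pci/devices/0000:02:00.0/driver_override
    echo 0000:02:00.0 > /sys/bus/pci/drivers_probe
    # start the guest with the raw device mapped into it
    qemu-system-x86_64 -enable-kvm -m 2G \
      -drive file=guest.img,if=virtio \
      -device vfio-pci,host=02:00.0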
In addition to the physical card, you'd have to configure a virtual one, and the Linux VM would act as a router and/or bridge between them, giving the host network access through the virtual NIC. This is similar to how VPN software works, but instead of a VPN software tunnel, the host would be connected to the virtual machine.
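Inside the Linux guest the routing part is ordinary forwarding plus NAT; a minimal sketch, assuming wlan0 is the passed-through WiFi uplink and eth0 is the virtio NIC that faces the host:

    # inside the Linux guest (interface names are examples)
    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
    iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
    iptables -A FORWARD -i wlan0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

The FreeBSD host would then just point its default route at the guest's address on that virtual link.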
USB pass-through would also be a possibility…
https://www.linux-kvm.org/page/USB_Host_Device_Assigned_to_Guest
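If the adapter were a USB device rather than PCI, the QEMU/KVM version is basically a one-liner; a sketch with placeholder vendor/product IDs (take the real ones from lsusb):

    lsusb   # e.g. "Bus 001 Device 004: ID 0cf3:9271 ..." -- IDs below are examples
    qemu-system-x86_64 -enable-kvm -m 1G \
      -drive file=guest.img,if=virtio \
      -usb -device usb-host,vendorid=0x0cf3,productid=0x9271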
I have used VirtualBox to pass through things like USB drives and webcams from a Windows host to a Linux guest. This works at a different level, with the host OS managing the PCI USB controller but redirecting the USB messages into the VM. This functionality is not supported by the official Windows drivers, and the proprietary extensions used by VirtualBox were not 100% stable for me a couple of years ago; YMMV.
In either case, the host OS does not need to support the hardware itself (it must not handle the device for passthrough to work). The “raw” device is isolated and passed through.
Yes. bhyve (like any modern hypervisor today) has PCI passthrough, so as long as the hardware supports it (an IOMMU) it will work.
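Two quick sanity checks on the FreeBSD side before going down this road (a rough sketch; exact output varies by machine): the firmware has to expose the IOMMU (a DMAR ACPI table on Intel, IVRS on AMD), and after listing the card in pptdevs it should come back attached to the ppt driver rather than its usual one:

    # does the platform advertise an IOMMU?
    acpidump -t | grep -E 'DMAR|IVRS'
    # after adding pptdevs="..." to /boot/loader.conf and rebooting:
    pciconf -l | grep ^ppt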