The new CPU configuration gives the new SoC a good uplift in performance, although it’s admittedly less of a jump than I had hoped for from this generation of Cortex-X1 designs. I don’t think Qualcomm will be able to retain the performance crown for this generation of Android SoCs, and the performance gap against Apple’s SoCs is also narrowing less than we had hoped.
On the GPU side, the new 35% performance uplift is extremely impressive. If Qualcomm is really able to maintain similar power figures this generation, it should allow the Snapdragon 888 to retake the performance crown in mobile, and actually retain it for the majority of 2021.
At this point it feels like we’re far beyond the point of diminishing returns for smartphones, but with ARM moving to general purpose computers, there’s still a lot of performance gains to be made. I want a Linux-based competitor to Apple’s M1-based Macs, as Linux is perfectly suited for architecture transitions like this.
Remember, this is still a prototype; reaching a working product will take a lot of effort and might erode some of the performance benefits. They would still need “all the other parts” of the SoC, and to scale up to meet real demand.
So I would be cautiously optimistic.
Side note: an ARM engineer’s take on the RISC-V instruction set: https://news.ycombinator.com/item?id=24958423 (yes, I know, conflict of interest, etc.).
This somehow ended up on the wrong article
As a power-efficient offload server for something like sccache, sure, but I doubt they’ll be able to leverage enough of that extra performance to offset the cost of emulating x86 for the games I like to play while taking a break, so my primary machine will remain x86.
Without getting into VM hypervisors, CPU architectures, and system architectures, the problem of companies using patent and copyright law to block transcoding (basically a translation and semi-recompile/optimisation of executable code) artificially holds back end-user options. Anyone playing games will put themselves at a disadvantage, but it’s a general problem too for end users who have invested in software they are happy with.
In an ideal world I’d like to see Microsoft and, to be fair, every OS vendor enable or allow their OS to be run as a subsystem or via a translation layer.
Given how much manual work was needed to accomplish the static translation of things like Starcraft for the Pandora handheld, I’m doubtful it’d be that viable generally.
As far as software goes, I’m lucky on that front in that I’m fairly zealous about open source. Games are really the only closed-source things I run outside DOSBox or an emulator for non-x86 platforms like the SNES, PlayStation, or N64.
I know transcoding products have been available before being quashed with litigation. It’s really only an on-the-fly and/or cached decompile and recompile. Most software will probably be okay with this, but like you say there are going to be systems or individual pieces of software which won’t easily cooperate, or won’t cooperate at all. In some respects this is a task for the big companies to organise, but if it can be identified as a market failure there is a possibility for government intervention. In that respect I find putting a marker down for discussion is worth it.
I use LibreOffice and Firefox and Thunderbird, so 90% of my use is covered by open-source software. That remaining 10% is 90% of the effort, though. The biggest hurdle I find at this point is the lack of user-friendliness.
I’m still baffled why Linux doesn’t have a subsystem/translation layer to parcel things up so Windows apps et al. can run seamlessly, either via emulation or via system calls to an underlying Windows install. Didn’t OS/2 do this? Windows can do this, but they bottled it with the latest rev of Linux on Windows.
OS/2 included a thing called Win-OS/2 that ran Windows applications. It was essentially a full copy of Windows, with some modified graphics drivers. It worked because Windows was running on the same CPU (no emulation needed); all OS/2 needed to do was supply a plausible DOS runtime that Windows could run on top of, and OS/2 included a DOS runtime from the start. Also note that IBM had access to Windows source code so they could ensure it worked.
The analog of that today would be something like running a VirtualBox VM and pointing it to a Windows partition, which it can do via a .vmdk file (a virtual disk that refers to real disk locations.) VirtualBox with its own guest additions also supports a “seamless windows” feature. These two things together would be very similar to Win-OS/2, although it is a full blown VM. Another thing today is using RDP’s seamless mode, so the video hardware exposed to the VM is fully disconnected from the display which is occurring across a network interface.
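For anyone wanting to try the setup described above, a rough sketch of the VBoxManage commands involved follows. The partition number, VM name (“WinNative”), and controller name are assumptions for illustration; substitute your own. Pointing a VM at a real disk is risky, so back up first.

```shell
# Create a raw-disk VMDK that refers to an existing Windows partition
# on the physical disk. "/dev/sda" and partition 3 are assumptions --
# use your actual device and Windows partition number. Requires read
# access to the raw device (typically root or disk-group membership).
VBoxManage internalcommands createrawvmdk \
    -filename ~/win-native.vmdk \
    -rawdisk /dev/sda \
    -partitions 3

# Attach the VMDK to an existing VM (hypothetical name "WinNative")
# on its SATA controller. With the guest additions installed inside
# Windows, seamless mode can then be toggled from the VM's View menu.
VBoxManage storageattach "WinNative" \
    --storagectl "SATA" --port 0 --device 0 \
    --type hdd --medium ~/win-native.vmdk
```

This is only a sketch of the idea, not a tested recipe; dual-booted Windows installs can also object to hardware changing underneath them, so an initial boot in the VM may trigger reactivation.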
I hadn’t used VirtualBox’s seamless support until your message encouraged me to. It has some strange clipping when windows are moved, since it needs to figure out which screen regions to take from the VM, and that lags noticeably. Unfortunately VirtualBox only tells Windows about one monitor, so all Windows windows need to stay on it, and my version of VirtualBox isn’t detecting Windows 10’s desktop correctly, although it does detect older versions fine. Still, it has potential – right now I have a command prompt from 32-bit Windows 10 and 64-bit Server 2003 side by side, and for the type of development I do this could be really useful and convenient.
@malxau
It’s been years since I toyed with OS/2, and I was too much of a doofus to get games working properly in OS/2’s DOS, so I gave up on it. It was nice when it worked, though.
I’m not doing this at the moment as I’m currently Windows 10 only, but I have set up Windows and Linux Mint with VirtualBox and Wine so that I had the exact same set of applications running on both. Some, like LibreOffice and Firefox, were native on both platforms, while Adobe Lightroom (32-bit only, sadly, as the 64-bit version doesn’t seem to want to cooperate) worked perfectly on Wine with some fiddling, apart from the web-publishing stuff. I got so used to whichever OS I booted into (which for a few months was mostly Linux Mint) that I would sometimes forget which OS I was using, go to do an update or whatever, and discover I was using the wrong one.
It’s definitely a faff, but I agree it has potential.
I used RDP to access my old tower system, which I was using as a fileserver until the mainboard died. I have since discovered Unison, which is really amazing for backups and synchronising.
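Unison’s basic two-way sync is a one-liner; a minimal sketch follows. The hostname (“fileserver”) and paths are made up for illustration – Unison must be installed at the same (or a compatible) version on both ends.

```shell
# Two-way synchronisation between a local directory and a remote one
# over SSH. "-batch" skips interactive prompts for non-conflicting
# changes; "-times" also propagates file modification times.
unison ~/documents ssh://fileserver//srv/backup/documents \
    -batch -times
```

Run it interactively (without `-batch`) the first few times to review what it proposes before trusting it unattended.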
There was a Windows translation layer application some years ago, for Windows 95 I think? Microsoft bought them out before closing them down.
@HollyB, I don’t know which things you’re thinking of here. Microsoft bought Virtual PC from Connectix, although I don’t think it was killed off exactly – it was maintained and bundled with Windows 7, and Windows 8 included Hyper-V to replace it. To the extent it was killed off, there was a PowerPC Mac version which was bundled with Office 2004, but Microsoft never released one ported to Intel. Running Windows on an Intel Mac was never difficult, so it probably didn’t make commercial sense.
In terms of Win-OS/2 like environments for Win95, the closest I can think of is Merge/Win4Lin which provided an emulated DOS runtime and hosted Win95/98 on top of that on Linux. This wasn’t a complete VM and gave Windows access to the native Linux file system directly via its emulated DOS, which is exactly what Win-OS/2 did. Tenox recently dusted off Merge and there’s a bit of discussion about it at https://virtuallyfun.com/wordpress/2020/11/03/fun-with-openserver-and-merge/ . His next article after this one talks about WABI which is even more similar to Win-OS/2 although it was limited to Windows 3.x.
@malxau
Maybe Merge/Win4Lin was what I was thinking of? If it wasn’t this it was something very similar. Your link was useful. Thanks.
Without going through all the technical arguments from top to bottom, the issue of market failure may counter companies’ claims that something is not commercially viable. That discussion can get very big, so I’ll let it rest as I’ve already mentioned it.
I’m guessing anything really clever would need to live in the Linux kernel, subverting an installed Windows kernel and intercepting all its calls. Would it need to hijack the HCI? I have no idea and am out of my depth. As for non-native platforms, and transcoding a Windows installation rather than hosting it via an emulator, I have no idea whether that would work either. I’m sure there are a lot of headaches in there.