The introduction of the Copilot key marks the first significant change to the Windows PC keyboard in nearly three decades. We believe it will empower people to participate in the AI transformation more easily. The Copilot key joins the Windows key as a core part of the PC keyboard and when pressed, the new key will invoke the Copilot in Windows experience to make it seamless to engage Copilot in your day to day*. Nearly 30 years ago, we introduced the Windows key to the PC keyboard that enabled people all over the world to interact with Windows. We see this as another transformative moment in our journey with Windows where Copilot will be the entry point into the world of AI on the PC.
Yusuf Mehdi on the official Windows blog
Your next laptop will come with an “AI” key next to the spacebar. Yes, Microsoft and Windows OEMs are really going to be doing this. Your laptop will come with a dedicated copyright infringement key that will produce utter nonsense and misinformation at the push of a key.
This is pure and utter insanity.
I don’t know about insanity; I think it’s a “screw Google” button, and possibly Exhibit A in their next antitrust trial.
I think of AI like a car. It’s a faster way of doing things than we’d manage without it, but it in no way removes responsibility from the end user. And most people should have to take a test before using it.
I heard some rationale for it on the radio just recently. The idea is that AI is so useful it will be needed most of the day, justifying a purpose-built button. However, all of the examples seemed to indicate it would make more sense to just build it into Microsoft Office. If they want to use AI to do something with a spreadsheet or email, then the AI should be inside the spreadsheet/email, where it has better context and can reduce annoying prompt authoring.
My keyboard has a lot of extra meta keys. I don’t have a problem with them, and they can be mapped to something useful (launch a browser/calculator/whatever). However, what they actually show in the video is that this replaces the right Ctrl key, which I’ve already been using to input hotkeys. For example, Ctrl+T to open a new tab or Ctrl+S to save become more awkward to input using the left hand alone. I use that Ctrl key for text editing (Ctrl+arrow keys) so much, in fact, that it’s actually worn down. I’d hate to lose such a useful key… hopefully it can be remapped back to a Ctrl key in the BIOS, but OEMs aren’t always good with things like this.
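If the firmware doesn’t offer a remap option, Windows can usually restore the key in software via the “Scancode Map” registry value. Here is a minimal sketch, assuming the key reports a single scancode of its own; the E0 6E code below is purely hypothetical, so check what your keyboard actually sends first. On keyboards where the key is wired up as a shortcut macro rather than a plain scancode, this approach won’t work and something like PowerToys Keyboard Manager is needed instead.

```python
# Sketch: remap a hypothetical Copilot-key scancode (E0 6E) back to Right Ctrl (E0 1D)
# by writing the Windows "Scancode Map" registry value. Run as administrator.
import winreg

# Scancode Map layout: two zero DWORDs (version, flags), the entry count including
# the terminator, one DWORD per mapping (output word, input word, little-endian),
# then a zero DWORD terminator.
scancode_map = bytes([
    0x00, 0x00, 0x00, 0x00,  # version
    0x00, 0x00, 0x00, 0x00,  # flags
    0x02, 0x00, 0x00, 0x00,  # one mapping + terminator = 2 entries
    0x1D, 0xE0, 0x6E, 0xE0,  # output Right Ctrl (E0 1D) <- input hypothetical E0 6E
    0x00, 0x00, 0x00, 0x00,  # terminator
])

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    r"SYSTEM\CurrentControlSet\Control\Keyboard Layout",
                    0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Scancode Map", 0, winreg.REG_BINARY, scancode_map)
```

The mapping is read at boot, so it takes effect after a reboot and applies system-wide.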
I cannot edit my comment, but I wanted to say: I’m not against adding a new application key, but putting it there is both pointless and disruptive to other keyboard functionality. It would have been far more logical to add it as another “application” key, just like the browser, media player, etc. keys.
The 2024 Dell XPS already has it (or will, when it’s released). From Ars Technica’s look at the new lineup:
Lame.
Drumhellar,
Thanks for the link showing a picture of the full keyboard. Here’s another article about the new key specifically.
https://arstechnica.com/gadgets/2024/01/ai-comes-for-your-pcs-keyboard-as-microsoft-adds-dedicated-copilot-key/
This will likely be a Windows requirement, and 100% of OEM manufacturers will probably end up incorporating it for that reason.
It absolutely sucks that they’re replacing the CTRL key with a launcher button that can’t be reprogrammed!
…and to compensate for this, the Fn keys are removed.
satai,
The picture does not show the left-hand side of the keyboard, where the Fn keys often are.
It’s very clearly taking the space of the right Ctrl key.
Are you looking at a different source? If so, could you link it?
The real question is: what will Linux distros map it to?
Let’s assume it becomes sufficiently common in OEM laptops and third-party keyboards: what will Ubuntu/Fedora/Debian use the key for? Will they (or the DE) look to mimic the behaviour Windows users will grow to expect, or leave it as a dead key?
Adurbe,
At least in KDE, all of the extended application keys are mapped to something sensible by default.
IIRC in XFCE they were unmapped by default but could be mapped.
Someone else can report what Gnome does.
Given that they took away the right Control key, that would be a natural key to map it back to, assuming the keyboard controller still implements it as a proper modifier key.
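For the curious, here is a userspace sketch of that mapping using python-evdev: it grabs the keyboard, re-emits its key events through uinput, and rewrites the Copilot key’s code into Right Ctrl. The event node and the KEY_F23 code are assumptions (check with evtest what the key actually reports). If the key turns out to be a firmware macro rather than a single keycode (a Meta+Shift+F23 combination has been reported), a shortcut-aware remapper such as keyd is a better fit; and for a plain scancode, a udev hwdb entry is the more permanent solution than a script like this.

```python
# Userspace sketch: mirror the keyboard through a virtual uinput device,
# rewriting a hypothetical Copilot keycode into KEY_RIGHTCTRL.
# Requires python-evdev and access to /dev/input and /dev/uinput (typically root).
from evdev import InputDevice, UInput, ecodes

SRC = "/dev/input/event3"        # assumption: your keyboard's event node
COPILOT_CODE = ecodes.KEY_F23    # assumption: the keycode the key actually reports

dev = InputDevice(SRC)

# Virtual device that can emit every key the real keyboard can, plus Right Ctrl.
keys = sorted(set(dev.capabilities(absinfo=False)[ecodes.EV_KEY]) | {ecodes.KEY_RIGHTCTRL})
ui = UInput({ecodes.EV_KEY: keys}, name="copilot-remap")

dev.grab()                       # exclusive ownership, so keystrokes aren't delivered twice
try:
    for ev in dev.read_loop():
        if ev.type != ecodes.EV_KEY:
            continue             # skip SYN/MSC events; we emit our own SYN below
        code = ecodes.KEY_RIGHTCTRL if ev.code == COPILOT_CODE else ev.code
        ui.write(ecodes.EV_KEY, code, ev.value)   # value: 1 press, 0 release, 2 autorepeat
        ui.syn()
finally:
    dev.ungrab()
```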
Remap it to something actually useful, such as inputting an A followed by an I. This could save keystrokes when typing words such as pain or main and increase productivity ever so slightly.
gagol2,
+1, that would be really funny
Why not try going all in with a stenographer keyboard?
My keyboard does not even have a Windows key… and as long as my Model M is treated well, it will probably last another 40 years just fine. Not even a scratch after using it continuously for almost half a century. I have swapped out one spring during that time, though (the Enter key).
The Windows key is a great way of switching between the terminal and Chromium on Ubuntu Linux.
It’s actually one of the two useful additions to keyboards in the last 40 years or so. The other is a roller for volume.
Oh ffs, classifying the weights stored in a neural network as “copyright infringement” would require such a wide definition of copyright and such a narrow definition of fair use that it’s beyond even the MPAA’s wildest dreams.
I agree with the “misinformation” part though: LLMs will give you a completely wrong answer with the same confidence they will give you a completely correct answer.
kurkosdr,
Yeah, it’s a stretch.
This is a really curious topic, because in principle a neural net could be trained to differentiate truths and falsehoods given an accurate dataset, but therein lies the problem. Where do you get such a dataset? The internet, while very comprehensive, contains much misinformation. Even if we take AI completely out of it, humans have no inherent mechanism for discerning the truth either.
Children assume things are true for their first several years of life because they have zero frame of reference. As their neural pathways are formed, new information is accepted or rejected based on the existing pathways. But what they absorb needn’t be true. A child that is fed enough misinformation may not be able to accept facts that contradict the world view ingrained into their brain.
At least with science, things are testable, but whenever we’re forced to rely on human records, those can technically be fabricated, and historical accounts are reframed by the victors. An AI that is gaslit couldn’t tell truth from falsehood any more than a human could given the same information. The difference is that humans have life experience, whereas an AI has none; everything it learns is learned vicariously.
> This is a really curious topic, because in principle a neural net could be trained to differentiate truths and falsehoods given an accurate dataset, but therein lies the problem.
Could they? That presupposes that there are sufficient correlations between text and its truth value to reliably identify falsehoods, which is a dubious assumption.
This, again. LLMs reframe the problem of “Can we teach a computer common sense?” into the problem of “Can we find a training model so perfect that a computer never needs any common sense?”
Much like the case of alchemists reframing the problem of “Can I magically create gold out of thin air?” to a problem of “Can I find some kind of liquid that converts lead into gold?”, you have traded one impossibility for another. But if you don’t know both are impossibilities, the option that is not obviously impossible becomes really tempting.
Note: alchemists of old didn’t know that converting lead to gold means converting one element into another, which cannot be done with chemical reactions and requires nuclear reactions (i.e. immense amounts of energy in a nuclear reactor, to the point that it’s less economical than mining the gold); that’s why they kept trying.
However, it’s worth noting that alchemists’ experiments in metallurgy and chemistry gave us other useful metal alloys (and even gold look-alikes), so it wasn’t a total loss. Similarly, LLMs will fake intelligence well enough to serve as assistants, but they will still lack common sense and critical thought, which means they will still give you wrong answers with the same confidence as right answers and will still require human supervision.
kurkosdr,
Any time these questions come up, I like to push back on the notion that humans are inherently superior. I think a lot of what we consider “common sense” might actually consist of very highly evolved pattern recognition. This pattern recognition encompasses everything from basic arithmetic to morality and politics.
Although we haven’t developed “general AI” yet, the better NNs become at emulating human output, the less I believe we are justified in claiming that we are somehow more intelligent.
Mote,
Of course, that’s the point. The accuracy of the output can’t be better than that of the input. Garbage in, garbage out.
Even though we’re talking about this in the context of AI, this philosophical concern applies even when we take AI out of the equation. For example, those who consider the Bible true take that as a given, and it shapes their world view. One could train a neural net on biblical text, but that doesn’t actually mean the biblical accounts were ever true.
The copilot key in their example image looks pretty obnoxious.
That’s the point: so that you accidentally hit that key and hopefully start using it.
I might be mistaken, but I recall they already tried that some years ago, with a Cortana key. Now, Cortana was such a flop they might have pruned the Internet of its memory for good: searching for it, most results gave me “how to disable Cortana in Windows 10”.
Image search seems to say it was just Toshiba trying that.
Why can’t they just reuse the useless App Menu key? This is just MS trying to redefine how our workflow should be. This is their way of forcing a snake oil feature down our throats.
How do the kids put it these days? “Obvious engagement-bait is obvious”, or something to that effect? [looks at article comment count] But I guess it works, right?