Apple is working on a processor devoted specifically to AI-related tasks, according to a person familiar with the matter. The chip, known internally as the Apple Neural Engine, would improve the way the company’s devices handle tasks that would otherwise require human intelligence – such as facial recognition and speech recognition, said the person, who requested anonymity discussing a product that hasn’t been made public. Apple declined to comment.
It’s interesting – and unsurprising – that while Google is investing in server-side AI by developing its own custom AI hardware, Apple is apparently investing in keeping AI local. It fits right into the two companies’ different approaches to privacy.
As a sidenote – isn’t it interesting how, when new technologies come around, we offload them to a specific chip, only to bring them back into the main processor later on?
Apple’s culture of secrecy prevents them from doing any bleeding-edge research. The top neuroscientists all have an academic background. They aren’t going to put up with rules preventing them from publishing in journals or appearing at academic conferences.
Apple has already addressed this.
https://9to5mac.com/2016/12/06/apple-ai-researchers-can-publish/
A slide is no substitute for actual papers. Until then…
https://www.forbes.com/sites/aarontilley/2016/12/26/apple-publishes-…
Reminds me of the days when a math coprocessor was an optional add-on for the 8086 etc.
It’s more like the early MP3 decoders for computers. They were a separate chip since no processor (you could afford) had the power to decode MP3s in real-time.
Um… what? The MP3 format was created the same year the Pentium was released, and the Pentium was fast enough to play MP3s. By the time MP3 saw any reasonable level of adoption, CPUs that were more than adequate to play MP3s were already quite cheap.
You must be thinking of MP3 encoders – Creative Labs had a model of their SB Live card with an on-board encoder that could encode faster than real time (IIRC, even when the first Athlons were released, mp3 encoding time was still often used as a benchmark).
No, I meant decoders. I did say “YOU COULD AFFORD”. I don’t know about you, but I wasn’t about to spend more on my computer than my car at the time, so I didn’t get a Pentium, and neither did many others until the price dropped. Most folks were still using 386 and 486 processors when mp3 came out. Hell, I knew folks using the 386SX and even a few 286s. The Pentium was hideously expensive until the P2 came out and drove the price down.
Except MP3 didn’t receive widespread adoption until several years AFTER the format was created. At that point, even the lowly Pentium 75MHz was dirt cheap (Hell, it was pretty cheap when my parents bought one in ’95), and it was powerful enough to play MP3s.
For comparison, the Diamond Rio PMP300 didn’t come out until late ’98, and Napster wasn’t released until ’99. Before that, there wasn’t much interest in MP3, and certainly not enough for dedicated decoder cards.
Are you sure you’re not thinking of MPEG cards for video? Those did MPEG audio, too, but were mostly for video.
I mean, I’ve tried to find an example of a card with MP3 decoding; besides MPEG video cards, the only one I can find was released in 2001 – well past the point where MP3 decoding was any sort of burden.
Do you have any examples of such hardware?
I recall separate MPEG decoders. I can’t remember MP3 cards.
Here’s one you can still get, but stuff like this was out for most low-end computers at the time. Your only other choice (if you had at least a 386 or 68030) was mp2, which almost beat out mp3 at the time because of its lower computational requirements.
http://amigakit.leamancomputing.com/catalog/product_info.php?produc…
That was first released in 2012. I’m still waiting for an example of something that people would have used for their 486s or earlier, as you claim existed.
I said LIKE THAT, not that exactly. They’ve been making chips specifically to decode mp3 audio in hardware since the format came out, and nearly every line of low-end computers, from the C64 to PCs, had at least one or two of these things you could plug in to listen to mp3s.
Come on, man. One example.
If these existed, you should be able to find an actual example of what you’re talking about, especially if it was something sold after the world wide web became a thing.
I mean, I put in an honest effort to find such a device, but the earliest example I could find was from 2001, and it was only a minor feature on a sound card with a broader set of features – and 2001 is way, way after the point where people were using computers that couldn’t easily play MP3s.
Even a 486 could play MP3s no problem. I remember doing that, though encoding just a few songs had the computer working for more than 12 hours, and the resulting 40 MB of MP3s was 25% of my disk space and wouldn’t compress with Stacker (runtime disk compression for DOS).
Yeah, but that was ALL you could do. You needed a Pentium or a 68060 to do anything else while playing an mp3.
That one must have passed me by.
Isolation has consequences.
This is where it is going, because single chips are not getting much faster.
The general-purpose nature of the CPU means it tends to be used more as a resource manager, with dedicated chips handling specific functions.
For now, it will ease the implementation of AI-derived methodologies/tools by having them reside on a separate chip. This allows the AI processing units to be optimized for the relevant data types and instruction structures. Later, one could easily see integration onto the CPU die, similar to what has occurred for the math co-processor and the graphics co-processor.
Apple and Google are taking different implementation paths according to their business models. For Apple, it’s a dedicated local chip in future devices to sell. For Google, it’s dedicated “cloud” servers to attract traffic and sell more advertising.
I’ve said it before, but I’m certain future CPUs will include large areas dedicated to FPGA such that they can be reprogrammed to run specialized routines extremely quickly. Fixed function coprocessors quickly become obsolete, but with an FPGA, developers will keep finding clever ways to reprogram them for years, even adding functionality that was unanticipated by manufacturers!
My hope is that you’re right. Local “Apple style” AI would bring that much closer: once it “perceives” a benefit, it could move code onto the FPGA array.
Easy things – like the “paths” you take on each journey, the camera settings you prefer for different scenarios, etc. – could be handled directly by a small AI.
Note that drawing, photo taking and photo recall can benefit a lot from a small AI :O
On-device agents starting to “seriously” guess your future wishes and needs is going to bring a new evolutionary phase to the UI :/
I think this is an idea many would like to see, but so far FPGAs are still pretty slow relative to their cost.
Lennie,
I know, I find myself very intrigued by the possibilities of FPGAs, yet I can’t justify the hardware and licensing costs to acquire one for myself.
Perhaps we’re in the “chicken vs egg” stage, but I believe economies of scale will rapidly drive prices down, as has happened with other integrated circuits in consumer electronics. It just makes sense for tech to evolve in this direction.
Seems kind of affordable:
“The $99 Arty Evaluation Kit enables a quick and easy jump start for embedded applications ranging from compute-intensive Linux based systems to light-weight microcontroller applications. Designed around the industry’s best low-end performance per-watt Artix®-7 35T FPGA from Xilinx.”
https://www.xilinx.com/products/boards-and-kits.html
https://www.xilinx.com/products/boards-and-kits/arty.html
But maybe you still need software?
Which needs a license.
Lennie,
Nice find.
It interests me a great deal, but these skills are nowhere close to the PHP & SQL & business-app work I typically get from clients. I don’t know if there’d be any demand for FPGA-accelerated hosting services, haha, but I’d be happy to give it a shot if they’ll pay for it!
Edit: Obviously dedicated FPGA processors could be put to better use in other domains, but that’s the awesome thing about an FPGA: it can be re-purposed for all kinds of tasks.
It truly is nothing like the others, because the ones you mentioned are software development. Programming an FPGA is hardware design, which means understanding electronics and chip design. The languages used are the same ones used for generating hardware designs: Verilog and VHDL. An example: https://github.com/sergeykhbr/riscv_vhdl/tree/master/rocket_soc/rock…
Lennie,
I’ve read that they are making progress with FPGA C & opencl compilers.
https://www.altera.com/products/design-software/embedded-software-de…
I believe this will eventually replace VHDL for most “commodity” FPGA applications.
Yes, I know, but I would expect you still need to understand hardware design to understand it / do something really useful with it?
Lennie,
I don’t know about today, but in the future scenario where FPGAs are bundled into commodity processors I expect most developers will be using high level languages rather than VHDL.
Like you said in another post, software development typically lags hardware development, but even so I expect the challenges to be overcome eventually.
Let’s see what the silicon people come up with.
So far I haven’t seen, for example, Google choosing FPGAs over GPUs for AI. Their long-term plan seems to be maybe using quantum computing for that.
I would speculate that Apple is interested in more vendor lock-in / control to increase the difficulty of hackintoshing and jailbreaking iOS and MacOS. They will provide a secure platform for Apple Pay, hardware locks and possibly APFS locks, as well.
This AI chip is NOT good news to me.
Is the lock-in in Google / Facebook / Amazon / Microsoft / <insert any cloud service you depend on> service any better?
If the Apple way can keep the data local instead of spilling it into the cloud, I’d rather go with that. So yes: it might be good news to get an alternative to whatever those companies riding the cloud money machine provide today – even if you end up paying more for Apple hardware.
Proprietary hardware – with the ability to uniquely identify you, your traits and transactions, and the ability to shut down hardware – is not a good thing. It is naive to believe that it is just an “AI” processor to enhance battery life and usability. This is not a question of Apple using cloud vs. local services – they are using both.
The pattern of first developing a separate chip and then integrating that technology into the main CPU comes from the physics of electronics.
You have maxed out the number of transistors in your chip doing useful stuff.
Then you have this nice idea about something extra, so you can only add that something extra as a separate chip.
Then the total practical number of transistors in a chip rises, and you can integrate your chip. In fact that may be a very good idea, since the total number of transistors rose but not all of them can work at the same time (the power budget is too small).
So adding that extra functionality to the main chip is a very good idea.
What is amazing here is that the industry was able to pace itself in such a way that this very pattern has recurred for decades.
The Amiga had some custom chips which gave it a light-years advantage over all other computers available in those days. Why does everybody think it’s bad to go that way? Look at sound, for example: the onboard codec stuff is still total crap, while dedicated sound cards are much, much better.
Local was always needed. Your hope is also mine. Nothing excludes Apple from working on AI at both extremes.
On the positive side: Apple is on the right path, this time.
“…only to then bring it back into the main processor later on?”
The math co-processor. Also remember the physics and security companion chips.
The few science headlines I have seen about low-power neural chips suggest they are not silicon, binary tech.
So far, modern AI research shows you need a lot of data to train these AIs properly. You can’t do that on just a small device.
So my prediction is that the type of features/products Google delivers will be superior to what Apple delivers to its customers.
Lennie,
A neural net needs to be trained with a lot of data, but once trained, it’s trivial to deploy the pre-computed neural net to a consumer device. For example, a self-driving car neural net could be trained with terabytes worth of image/sensory data, but the neural net itself might only be a million neurons.
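To make that train-big/deploy-small point concrete, here is a minimal sketch in Python. All of the weights, shapes and numbers are made up, and it's plain numpy rather than anything Apple or Google actually ships; the point is that the deployed artifact is just a fixed bag of numbers plus a forward pass, and none of the training data has to leave the data center.

import numpy as np

# Hypothetical weights, as if exported from a big offline training run.
W1 = np.array([[ 0.2, -0.5,  0.1],
               [ 0.7,  0.3, -0.2],
               [-0.4,  0.6,  0.5],
               [ 0.1, -0.1,  0.8]])      # 4 sensor inputs -> 3 hidden units
b1 = np.array([0.0, 0.1, -0.2])
W2 = np.array([[0.9], [-0.3], [0.4]])    # 3 hidden units -> 1 output
b2 = np.array([0.05])

def predict(x):
    """On-device inference: forward pass only -- no gradients, no training data, no cloud."""
    h = np.maximum(0.0, x @ W1 + b1)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output in [0, 1]

print(predict(np.array([0.5, -1.0, 0.3, 0.0])))   # score for one made-up sensor reading

Scale the same idea up to millions of weights and you get exactly the kind of fixed matrix math a dedicated neural engine could run in hardware, while the terabytes of training data stay on the servers.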
Personally, I much prefer this approach of running things locally over running them in “the cloud”. A neural net running on the phone can handle extremely high bandwidth with very little latency. Algorithms running in the cloud require some inherent compromises: reduced data fidelity/compression, lag, and lower reliability. Let’s not forget the incident when Steve Jobs didn’t have enough bandwidth to demo his own iPhone; bandwidth-intensive functionality isn’t ideal for the cloud, but it is ideal for local neural nets.
So I think local neural nets will be beneficial for real-time interaction. That said, who knows if Apple will actually do a good job or not.