A start-up called Movidis believes a 16-core chip originally designed for networking gear will be a ticket to success in the Linux server market. The Revolution x16 models from Movidis each use a single 500MHz or 600MHz Cavium Networks Octeon CN3860 chip.
Especially when they don't bother to give any performance figures for an individual core, and simply try to sell the product on marketing fluff like "It has more cores than Sun!"
I'm going to head to my local fab and get them to stick 512 i286 cores on a single chip.
More cores? Yes.
Slightly more than useless? Yep.
—
I think they should stick to networking gear, or sell this as part of a piece of hardware that is essentially a black box anyway, e.g. a SAN controller or something. Trying to sell this as general-purpose hardware for server use is not going to happen. Look at Apple's success.
True, that and the fact that the software running on top will need to be MASSIVELY threaded, I mean, massively, I mean, insanely threaded.
There also needs to be a value-added perspective; if they have their own distribution, their own middleware, and consultancy services, then sure, I could see the company surviving. But being a pure hardware company isn't going to work in the long run, given that Intel is boosting its multicore offerings, and the same goes for AMD.
The alternative is this: port OpenSolaris across to it, which is built to run on these insane sorts of configurations, work with Sun, license Java and get that ported across, and license the Sun middleware and rebrand it all under the company's title.
Hmm “MASSIVELY” threaded, now where in the world would I find an app like that. Hmm.
Maybe some exotic, ultra-special-purpose app like, say, Apache. Or that uber-special class of apps that no one uses, called databases. Or *gasp* those two apps used TOGETHER!! Yeah, no one would have a need for a machine like that.
So says the person who doesn't have a clue about software; there is a difference between being threaded and being threaded to the point where an application can efficiently run on a CPU like the one in the said article.
An application with 3 active threads could be considered 'threaded', but it sure as hell isn't going to run efficiently on the above configuration, that's for sure!
That's why it's targeted at the server market – servers have loads of threads, so an architecture like this is perfect for them. Those threads spend most of their time waiting for data, so a high clock rate isn't a great deal of use.
There are already a few companies that sell processors which can handle lots of threads; Sun is just one of them, and even IBM sells a version of the POWER5+ targeted at this sort of work (low clock, 8 threads). Everyone else is working on them too, e.g. Intel is working on an 8-core chip with 4 threads per core (they are also said to be working on a 32-core device with "mini-cores").
Agreed. Without any proof it's not a great deal more than hype. It's also hard to see where this fits in. Is this meant to be an SME-level or large-enterprise-level device?
That aside, it won't take all that long for the larger firms to push out competing products, at which point the question of support and brand experience is raised.
You could go with a small start-up or go with a much larger firm, with greater support and history.
Seems this will be just like the numerous no-frills RAID devices out there.
> A start-up called Movidis
According to http://www.movidis.com/corp.htm they’ve been around since 2001. Does that really qualify as a start-up?
If the chip turns out to be a poor, or even anything less than amazing performer, it will be a failure. People are generally wary about buying expensive equipment from a new company.
If it turns out to be a great chip, the big chip companies will put out competing products and their superior experience, production volume, and reputation will destroy a startup like Movidis.
Look at what happened to Transmeta. They never made any money with their chips. The only reasons they are still alive are that their talent for hype attracted a lot of investors, and that they ended up distancing themselves from the actual chip business.
Well, that, and actually licensing highly specialised and apparently useful products to Microsoft and Intel.
What's up with all the negativity? It sounds great to me. And for those people complaining about a lack of benchmarks, this sort of announcement didn't really need them. For those of us who have dealt with non-x86 architectures (yes, there are plenty of architectures that will blow your Core Duo out of the water), 16 x 600MHz MIPS cores is insane. If this takes off, I say good luck to both Intel and AMD in trying to build something that will beat this beast.
Agreed.
“The Octeon chips consume only 30 watts of power. The overall systems have networking acceleration, eight gigabit Ethernet ports, and hardware-based encryption abilities.”
While you might be able to beat it in terms of computational power, you certainly cannot beat it with x86 in terms of power used.
If the x86 performs 50x better (computational power) than this chip but consumes 80 watts, it still beats Movidis's revolutionary (hah) chip hands down in power used per unit of work.
I don't like Sun's marketing team (but love Sun's engineering team) – the one thing they did get right, however, was SWaP: space, watts, AND performance. You can pick up a basic, functional processor that draws less than a watt. It'll also perform thousands of times slower than a Woodcrest, for instance. So yeah, it might suck less juice while it's running, but it's going to have to run a LONG time to finish the job. It ends up taking more power in the end, and being slower to boot!
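A quick back-of-the-envelope sketch of the SWaP point above — a low-power chip that runs much longer on a job can burn more total energy than a fast, hungry one. Every number here is an illustrative assumption, not a measurement:

```python
# Energy-to-completion: energy (J) = power (W) x time (s).
# A "frugal" chip that is very slow can still lose on total energy.

def energy_per_job(power_watts: float, seconds_per_job: float) -> float:
    """Total joules consumed to finish one job."""
    return power_watts * seconds_per_job

# Hypothetical 1 W chip that needs 1000 s per job:
slow = energy_per_job(1.0, 1000.0)   # 1000 J

# Hypothetical 80 W chip that needs 2 s per job:
fast = energy_per_job(80.0, 2.0)     # 160 J

print(slow, fast)  # the "efficient" chip burns >6x the energy here
```

The point isn't the specific figures; it's that watts alone say nothing until you divide by work done.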
Yeah, right. But also remember that for servers I/O is very important, and I could see the number of processors boosting this.
That being said, I wonder what amount of RAM they'll throw in, because that's quite important for servers as well…
Get a T1000/T2000 then. Decently fast cores, and a lot of them. Just lacking in FPU power which doesn’t matter for a lot of serving needs.
This thing is a bunch of slow cores (from what we can see) that cost a lot. No thanks.
I'm not an expert regarding servers, but I'm convinced that the needs of a server are very different from the needs of a workstation. You seem to have the needs of a workstation in mind when thinking about servers.
As a first remark, a server will be powered on 24/7, and in that sense there is no point in claiming that it needs to run longer than a Woodcrest to do the same amount of computation, since it will be on 24/7 either way. So if a slower server is sufficient to do its job and uses less power while doing it, that's clearly a good thing and will save you money on electricity bills (it uses less power and likely needs less cooling in the server room too).
Also, as I said, I'm not an expert, but I don't believe that pure computational power is really the main concern in a server. For a simple example, let's say you have a Pentium 200MHz with a 100Mbit Ethernet connection, and suppose it's used as an FTP server and only as that. Well, I'm fairly sure that this is sufficient power to fill the 100Mbit connection. Then I don't see any point in using a dual-core 3.0GHz Woodcrest for the same job on a 100Mbit connection; that would clearly be overkill. It all depends on what the server is used for, of course. If you run a website where heavy PHP/Ruby/Perl code runs for each request to build the to-be-served page, you'll need more computational power.
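For what it's worth, the arithmetic behind the FTP example can be sketched like this — the cycles-per-byte figure is purely an assumption for illustration, not a measurement of any real NIC or kernel:

```python
# Can a Pentium 200MHz fill a 100 Mbit/s pipe? Rough sketch only.

link_bits_per_s = 100e6
bytes_per_s = link_bits_per_s / 8              # 12.5 MB/s to move

# Assume ~10 CPU cycles of overhead per byte served (DMA-assisted
# NIC, few copies) -- an assumption chosen for illustration.
cycles_per_byte = 10
cycles_needed = bytes_per_s * cycles_per_byte  # 1.25e8 cycles/s

cpu_hz = 200e6                                 # Pentium 200MHz
utilisation = cycles_needed / cpu_hz
print(f"CPU utilisation: {utilisation:.0%}")   # ~62% -- feasible
```

Under less favourable assumptions (heavy copying, per-packet interrupts) the same CPU would fall short, which is exactly why I/O design matters more than raw clock here.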
Still, I think that pure computational power is not that important. In my opinion the speed of the CPU is fine, but the data speed between the different components of the system should be really good. I/O is very, very important on a server. Think of memory access speed — and what about bottlenecks with 16 cores accessing the same main RAM? What about cache memory? Then there's access to the network and hard disks… To get an idea of the performance of a server, you need a complete picture of the whole system. The fact that the CPU 'only' runs at 600MHz is not going to make it a slow server at all.
Also, as RandomGuy mentioned, these systems have hardware-based encryption abilities… I assume this helps with things like SSL/ssh connections(?). In that case, that's already some computational load taken off the CPU itself.
In any case, I think it has a lot of potential. There is of course the fact that they aim for the non-lightweight server market, and companies might be reluctant to invest in something completely new from a relatively unknown company instead of sticking with the kind of servers they get from IBM, Sun, HP… Can a company like Movidis, for example, offer something like next-day on-site support? I think this might be more of a problem for Movidis than the technical quality of their product.
I think you need to look up my user profile and check out my website. I own an underground data center. I’m quite aware of the differences in servers vs. workstations. You’re right, highly parallel machines are good for servers, IF they are fast enough. That’s where the Sun T1000/T2000 shine.
This machine is probably dog slow, based on what limited information we have. Yes, servers are 24/7. I'm glad you figured that out. What happens when you have to run 20 of these suckers to equal the performance of one T1000? Guess what – you're paying MORE for electricity, higher management costs, etc. This is a low-end system designed for network devices, and I'm sure it would be excellent for such things. It's not a general-purpose setup, though, and it's not going to do well in anything that requires decent processing power. If all you want to do is serve static HTML, processor count isn't the issue anyway – it's memory and secondary storage/network. You can saturate a gigabit network with static HTML served by a P3 (maybe even a P2), assuming you've got the memory to handle all the processes.
Good try though.
” but I don’t believe that pure computational power is really the main concern in a server.”
Actually, this is not true unless you choose your server for specific tasks like DHCP server, FTP server, file server, print server… But if you are going to use remote desktops via terminal services, support a streaming server, or run a database for a bank or other financial institution, then the CPU is the main bottleneck in your system, and the more CPU the better.
Of course, to understand what I mean you have to be in front of a server with whatever OS and monitor its performance (MS viewperf.msc, Apple's Xserve "Server Monitor", or "gnome-system-monitor" on Linux); then you will understand what the bottleneck in your system is for the given services you run on that server.
If your bottleneck is the CPU then you are in trouble, because you have to buy another, more powerful one and ditch the weak one — unless, of course, you intend to convert the old server into a node in a cooperative cluster, or combine its power with newer machines using grid software such as Xgrid in OS X Server 10.4.
If your bottleneck is RAM, all you have to do is add more RAM; and if your HDD subsystem is the problem, you can upgrade your drives from PATA to SATA, then to RAID 0 or, better, RAID 5, and if that's not enough you can move up to SCSI drives, iSCSI-connected storage, or an external NAS on a 2Gbps Ethernet adapter.
So finally, I wish to stress the fact that not all of a server's services are demanding, only some of them; to get a feel for this you need to experience it yourself through monitoring software, to understand how big the problem is and find the best way to solve it.
I am still waiting for a 64-core processor, which might pump up PC speed really fast.
Current CPU technologies tend to take many years to double in performance, while in the past the shift from the Intel 4004 to the Intel 8008 produced a 10x increase in speed. I have yet to see something like that again. Intel's current jump from the Pentium D/Extreme to Core or Core 2 didn't achieve more than a 3x speedup in the best case (in integer calculations, not floating-point).
If you heavily use your CPU then you will notice a 3x speed difference, but you will notice the cash difference even more, so to make a realistic purchase decision you should be upgrading from an old CPU to really feel the difference.
So if you purchased your CPU 1–2 years ago it's not wise to upgrade, but if your CPU is 4–6 years old then it would definitely be wise to upgrade to a Core 2.
Finally, I would say that 16 cores are not enough; we need more. We even need optical CPUs that operate with photons, in which one core would equal 128 copper-based cores (compare the speed of light with electron drift speed in copper under necessarily limited voltage), and one GigaPulse would equal 1024 PetaHertz.
Hmm… and what are you gonna use all that power for ?
Are you talking about a server or a workstation or a regular desktop computer ?
In any case, about “I am still waiting for 64 core processor, which might pump the PC speed real fast”, I’d say that I don’t see a 64 core cpu make the desktop/workstation computer faster. The performance gain from going from 1 to 2 cores is pretty big (noticeable during regular desktop usage). The performance gain in going from 2 to 4 cores will be less, but still pretty big depending on the type of things you use your computer for (noticeable on a workstation, but less noticeable during regular desktop usage). In my opinion, 8 cores or more is only really useful for servers.
Yes, one could reason that today many applications are not making optimal use of threads and multiple cores. That's right, but parallelism is not an unlimited resource: you simply can't say "oh, we've got 64 cores now, so let's split the program up into 64 threads so we can make full use of those cores". That's simply not possible with most applications.
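The ceiling being described here is the classic Amdahl's-law limit: if any fraction of a program stays serial, 64 cores buy you far less than 64x. A minimal sketch (the 90%-parallel figure is just an example):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n cores when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# A program that is 90% parallel gets nowhere near 64x on 64 cores:
print(round(amdahl_speedup(0.90, 64), 2))     # 8.77
# ...and can never exceed 1 / (1 - p) = 10x, however many cores:
print(round(amdahl_speedup(0.90, 10**9), 2))  # ~10.0
```

Which is why "just add cores" only helps workloads (like servers handling many independent requests) where p is naturally close to 1.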
“Hmm… and what are you gonna use all that power for ?”
1. Well, we might start to use voice commands and voice technology more reliably. Imagine how nice it would be for your grandpa to say to the computer: "Computer, tell me the weather for today, then tell me when bus 33 arrives at station 47, then record my TV show Dr. Phil, then back up the PC, then shut down, then come back on at 7:00PM when I arrive; now execute."
2. We could play games much better than today: 10 cores for game physics, 10 cores for artificial intelligence (AI), 10 cores for special effects, 4 cores for spatial 3D sound, 40 cores for graphics, and…
Then your game might be called a realistic video game, rather than the sketchy frame-by-frame affairs we have now.
3. I could run VMware and emulate an office of 4 workstations. Right now, if you install 2 virtualized Linux OSs on VMware, with HT or multiple cores enabled, to study networking between them, the system becomes unusable because the CPU sits at 100%. Besides all that, I would be able to emulate an OS and play a game with real performance, like on a real system.
4. I could support 20 terminal services sessions running concurrently on my server without the CPU crying mama… mama…
5. Other wishes and other people wishes to be filled in here __________.
There will always be a need for speed on many fronts in human life because life is short and time is money.
“2. We could play games much more better than today”
Hehe, let me remind you that the resolution of your eyes is limited and you might finally want to upgrade them.
Or get even faster response times by putting a plug right into your head… and hope the matrix isn’t running on Windows
Another reason for needing powerful PCs is sloppy coding, which will probably get worse even as JIT compilers are built and improved, because people will then go and build yet another layer of very high-level languages on top, so that one single line can keep an entire cluster busy for days. Hey, that reminds me of stories of pain and glory, punch cards and FORTRAN.
Seems like “some things never change”
As a matter of fact, Joe Sixpack will always want the fastest system out there if he can afford it. No matter if he can put the power to good use or not. And be it only for showing off and boosting his ego. He _will_ buy it.
The real benefit here is for applications like serving web pages, where a thread or process often has to be forked to run a particular application. In these types of jobs more cores really start to count. In many cases, if you compare a Sun T2000 with an Intel or Opteron based server, the overall throughput in serving pages is brilliant; take that together with the system's low power usage and it becomes a very good value server. I know of several large web sites which moved from dual-core Xeon servers over to Sun T2000s and halved the rackspace they required to drive the service, while still maintaining their required quality of service and cutting power costs. Now that those companies have more rackspace to play with, they can expand much more effectively. In theory, if Movidis has 16 good cores with a nice bus design, they can really make inroads into architecture-agnostic environments where just serving web/data/images/porn at a good price point is what matters.
Unusual? What is so unusual about MIPS? Look at just about all Linksys/Netgear routers, not to mention other small embedded appliances — they are almost always using MIPS-based ARM/TI processors.
No such thing as a "MIPS based ARM/TI processor", I'm afraid.
MIPS is one processor architecture, ARM is a completely different architecture.
Yes, you are correct. I misread /proc/cpuinfo on an ARM-based appliance, where it says
CPU architecture: 5TE/MIPS
But when I do a
uname -a
Linux sxxx 2.6.16 #1 PREEMPT Thu Jun 8 23:38:13 PDT 2006 armv5tel unknown unknown GNU/Linux
So as usual we have a raft of “they’re doomed” and “can’t beat x86” comments.
History has indeed shown that it’s incredibly difficult for alternate processor architectures to succeed. This is most certainly the case when you look from the perspective of the desktop computer market. When looking at servers things are far less clear.
It is true that for a desktop computer 32 cores right now really isn't going to give a great deal of benefit. But if you are looking at a web application server that needs to deal with a high number of simultaneous connections, a server like this is a godsend. The numbers suggest to me that the performance of this box should be double that of a similarly priced x86-based box.
I wish these guys the best of luck.
Very well said, hraq. Saying “we don’t need this much power” etc. is incredibly short-sighted and ignorant of the needs of other people. There will *never* be enough CPU power for some tasks.
Would you bet your reputation on a 5-year-old company that is relatively small? Even if they have built a better mousetrap? Worst case, you buy and they go out of business — either from lack of customers or (if it really is that much better) from competition from the big players. Best case, they get bought out by someone with deep pockets. If Movidis really believes in their product, they should give it away to a few high-profile customers and invite reviews.
There’s still the memory wall. Cell tries to work around this. I think we need new languages/runtimes to truly unlock the potential of multicore. The old languages and systems are a mix of too much and too little. I’d rather have a multitude of simple cores than a single behemoth. We’ll reach a complexity wall eventually.
The Memory Wall is something of an artificial problem, resulting from processors being designed almost entirely separately from main memory, since the processor and DRAM industries are about as far apart as you can get in process, architecture, and location.
Design a memory system that can sustain one full random access per memory bus clock over the entire address space, and the processor design falls right into place without too much complexity (any old instruction set can work). You end up with a rather large number of threads though — a minimum of about 40 or so. That could be called a Thread Wall. A lot of people know what to do with lots of threads, but it seems most don't care for them. I'd take the slower threads with no memory stalls any day over 1+ threads with all-too-frequent deadly stalls.
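The "about 40 or so" figure follows from the standard latency-hiding rule of thumb: enough threads that while one waits on memory, the others keep the pipeline busy. The stall and work figures below are assumptions for illustration, not numbers from the design being discussed:

```python
import math

def threads_to_hide(stall_cycles: float, work_cycles: float) -> int:
    """Threads needed so a core never idles: 1 + stall/work, rounded up."""
    return math.ceil(1 + stall_cycles / work_cycles)

# Assume ~200 cycles of memory stall per access and ~5 cycles of
# useful work between accesses (illustrative figures):
print(threads_to_hide(200, 5))   # 41 -- in line with "about 40 or so"
```

Trade the stall for more work per access (better locality) and the required thread count drops fast, which is the whole Memory Wall vs. Thread Wall bargain.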
FWIW, the SRAM-like RLDRAM has around 20x more random throughput than conventional DDR DRAM, but we are stuck with a crippled 25-year-old DRAM architecture that was designed for the lowest possible packaging cost, even though DRAM pin count is no longer a problem (a qwerty DRAM, essentially). DRAMs today only perform random cycles about 2x as fast as in 1986, but have far more burst I/O bandwidth, while processor clocks are about 1000x faster. So why do we still use multiplexed DRAM with very poor banking? RLDRAM is about as close to SRAM speed at DRAM prices as you can get, but you get those nasty threads.
see RLDRAM (Micron), CSP, occam etc
SRAM consumes more power than DRAM — so much so that it wouldn't be practical in a server. That's why there's such a "memory wall" between the DRAMs and the caches. Server-grade computers typically have large amounts of level-two cache, which is made of SRAM and makes up for most of the problems with the memory wall.
You think a chip designer in the industry for 30 years doesn't know that?
Look up what RLDRAMs can do; I already said it before: one full random access every 2.5ns (ignoring bank collisions). RLDRAM only uses an SRAM-like interface scheme while still using a standard DRAM fab process. It costs 2x as much as regular DRAM, since it uses faster concurrent banks to get 8 cycles every 20ns, for about 20x more real throughput. Conventional DRAMs are artificially slowed by a 30-year-old muxed interface, a poor banking model, and the rest of the crud in single-threaded processor design.
An all-too-frequent miss in the TLBs gives me a 300ns random access. The TLB only has 256 associative ways; if I walk a tree or graph with millions of unordered nodes, I am doomed. Of course, if I stick to simple, cache-fitting, linear memory programming I can get far more performance.
With a processor and memory both based on threads, I could hop nodes and hash at will with only a small effective memory wall, but I would have to deal with many slower threads. The Memory Wall can be replaced by a Thread Wall.
The subthread title ought to be "unlocking Memory".
Hmm, sounds really interesting.
I read a few sites on the topic, but I somehow fail to understand how an improved addressing scheme could force anybody to use slower threads.
Could you please explain this in greater detail or point me to a site where it’s explained? That would be great!
Google <R16 Transputer wotug> and <RLDRAM CSP occam>.
Those take you to a paper on a processor design that exploits threads in the processor and memory to knock out the Memory Wall, plus a number of posts to various groups.
The idea is essentially quite simple: slow the CPU down and speed the memory up until they match closely enough that ld/st/br instructions are only somewhat slower than register opcodes (about 3x) for all addresses.
RLDRAM barely satisfies the memory side, giving 8 independent memory cycles over 8 banks, limited by the interface bus — currently at 400MHz and going to 533MHz.
Processor element slowdown is done by 4-way multithreading, very similar to Niagara. 10 PE cores then give around 1000–1500 integer MIPS, but spread over 40 or so threads, so each thread is like a 25–35 MIPS engine.
Such processor elements are very cheap and do not need out-of-order (OoO) or superscalar (SS) design; R16 only uses 500 flip-flops for a small ARM-like ISA. Actually, any ISA can be used, even x86 at more cost. Since PEs are cheap, the performance is high for the logic used. The main idea is to use the PEs to load up the MMU; it is okay for PEs to go idle, since they are cheap.
The paper describes how this all works in an FPGA design. The key is really the MMU, which uses hashing in an inverted-page-table type of search. As long as the memory is not excessively full, the MMU effectively has full associativity over the DRAM address space (unlike your typical TLB).
All banks must be equally loaded for this to work, and they must be operated concurrently. The MMU resolves bank collisions by reordering threads as needed, sorting on 3 bits of the physical address.
This can be sped up in an on-chip, smaller SRAM version to run the MMU bus at current clock speeds. Start with a 1MByte L1 cache split into M banks, with N of those working concurrently over L cycles. M should be many times N, to reduce bank collisions and idle banks. N should be > L, so that the MMU can keep N issues in flight. Of course, no single-threaded processor could ever keep N issues in flight, but 40 or so threads on 10 PEs can keep such a memory quite busy. Relatively small caches are still used for register files and instruction queues.
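To make the hashed inverted-page-table idea concrete, here is a toy sketch in the spirit of the MMU described above. The structure and names are my own illustration (linear probing over one flat frame table), not the R16 design itself:

```python
# Toy inverted page table: one entry per physical frame, looked up
# by hashing the virtual page number (vpn) and probing linearly.

class InvertedPageTable:
    def __init__(self, frames: int):
        self.frames = frames
        self.table = [None] * frames  # slot i holds the vpn owning frame i

    def _probe(self, vpn: int, i: int) -> int:
        # i-th probe position for this vpn (open addressing).
        return (hash(vpn) + i) % self.frames

    def map(self, vpn: int) -> int:
        """Assign (or find) a physical frame for vpn."""
        for i in range(self.frames):
            slot = self._probe(vpn, i)
            if self.table[slot] in (None, vpn):
                self.table[slot] = vpn
                return slot
        raise MemoryError("physical memory full")

    def translate(self, vpn: int) -> int:
        """vpn -> physical frame; KeyError if unmapped."""
        for i in range(self.frames):
            slot = self._probe(vpn, i)
            if self.table[slot] == vpn:
                return slot
            if self.table[slot] is None:
                break  # an empty slot ends the probe chain
        raise KeyError(vpn)

ipt = InvertedPageTable(frames=16)
f = ipt.map(0x1234)
assert ipt.translate(0x1234) == f
```

The property the paper relies on shows up even in this toy: as long as the table isn't nearly full, probe chains stay short, and the whole address space is effectively fully associative — unlike a fixed-way TLB.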
This isn't the first processor to work like this, but it may well be the first practical design with off-the-shelf parts. The threads must be explicitly programmed with something like an occam- or CSP-based Par C using message-passing objects. The paper only describes the hardware side of the work; much remains to be done.
In DSP speak, it's all just commutation.
I'm still wondering what all those x86_64 servers they keep selling out there are serving, since there is no 64-bit servlet API yet.
This won't work if you look at it from typical general-purpose deployment scenarios. Straying away from x86 support pretty much condemns it from ever being supported commercially by standard ISVs. I doubt the market of roll-your-own-software customers is large enough, particularly considering the expense of supporting a general-purpose distro, to recoup any investment.
But, when you take a software platform optimized and proven in the server room, and combine it with a hardware platform designed for and optimized for specialized network requirements, there has to be some opportunity there.
For instance, most of the leading enterprise-class security applications (firewall, content filtering, IPS, etc.) run on Linux, but often run into performance bottlenecks on standard x86 deployments without resorting to expensive accelerator cards. These customers are often more concerned with performance and resiliency than bottom-line price, so port those Linux applications to a specialized platform that can offer a decent price/performance mix and you have potential.
There’s also an increasing tendency towards collapsing multiple security applications onto a single platform, which is something a multi-core architecture could excel at if the applications are ported and designed correctly.
Of course, it's not that simple; more work would have to go into providing an acceptable support infrastructure around that. But the potential is there. I suspect these guys would have better luck if they targeted ISVs and vendors to partner with, positioning their product as a networking appliance rather than aiming at the general market — share the development costs and leverage those vendors' brand names to offer a turn-key solution for customers.
Doesn't sound like that's necessarily what they're trying to do, though, and I suspect the shortcomings of a custom architecture won't overcome any benefits for the typical customer in general-purpose deployments.
Just my 2c…
I believe the MIPS architecture was originally developed by MIPS Computer Systems. SGI then later took interest and decided to use MIPS processors in their systems. It was then that SGI purchased MIPS Computer Systems and changed the name to MIPS Technologies Inc. at which point they continued to develop MIPS technology for SGI.
Having been the sysadmin of a Unix machine (1989) from "MIPS Technologies" which ran an R2000: it did not have any SGI badges on it. Actually, if you do a bit of searching, the MIPS architecture came from a Stanford University project in the early 1980s. http://burks.bton.ac.uk/burks/foldoc/31/74.htm
Folks, I really can't stand this any more. All I'm waiting for is the question of how many frames per second (insert your favourite 3-D app/game here) it will do. All you had to do was cut & paste "Octeon CN3860" into your favourite search engine to get all the specifications for the processor you are dumping on so much. It's not that this company actually built the CPU; they just integrated a system around it.
As for the CPU's specifications, it really doesn't seem that slow. I'd really like to have that sort of thing in a laptop with something comparable in performance to integrated graphics cores. It's more than fast enough.
Make some REALLY cheap boxes for everybody — 4 cores @ 500MHz for $400 — and sell them to users to see what happens. Shooting only for the high end is insane, to say the least. Make them available to us so we can get our hands on them. This is the best that can be done. Period…
Hey, this sounds exactly like what the Chinese are planning to do with their next Godson implementations. Also, http://www.yellowsheepriver.org looks interesting; they have meanwhile updated their specs, though with no mention of an intended market price, nor of commercial availability. As far as I recall about their, uhm, "MIPS rip-off" — which it really doesn't seem to be, because it's realized totally differently internally and the instruction set is so widespread — they plan to have a 4-core Godson rather soon. It's very interesting to watch that market segment move: BLX IC Design partnered with AMD in the development of thin clients, AMD recently sold off anything MIPS, and some of the folks involved have founded, or will found, something new. Even more interesting would be the possibility to buy such a thing cheap, as you mentioned. I imagine something about as fast as a PIII @ 1GHz, with something like VIA's PadLock engine on board, in a Mini- or Nano-ITX form factor — but so far… no go.
…with regard to the dissing — yadda yadda, not Wintel-compatible, blah blah. Though it's only single-core and MIPS32 only, look at what MIPS actually CAN do. There is a DSL router called the "Fritz!Box", with all sorts of nice goodies available; it has options for WLAN/VoIP/ISDN/POTS/USB and up to 4 Fast Ethernet ports. What does it cost? Just under 200 EUR.
http://www.avm.de/en/index.php3?Produkte/FRITZBox/index.html
What does it run? Linux ftp://ftp.avm.de/develper/opensrc/
What CPU it is using? This: http://focus.ti.com/general/docs/bcg/bcggencontent.tsp?templateId=6…
So tell me: why in hell should I, as a home user, waste watts on some decommissioned Wintel box to play with Asterisk, for example, when I can have this?
Hmmm, there's no mention of any hardware virtualization support. That could make it very useful; not every server in this world needs to be a powerhouse.
(Also, wouldn't it be neat if you could shut down unused cores?)