I just emerged, blinking, from the world of Skyrim, only to realise Oracle has released the 11th version of Solaris (well, technically it's the 7th, but okay, we'll roll with it). I'll be honest and upfront about it: Solaris is totally out of my league, and as such, it's very hard for me to properly summarise what this release is all about, so I won't even try.
The good thing for mere mortals such as myself is that Oracle provides a nice bullet list of improvements in Solaris 11. In addition, there are several people on the web who actually do understand all this stuff and who summarise it neatly for peasants such as myself.
The big focus seems to be two technologies-turned-buzzwords: the cloud (in normal-speak: the internet) and virtualisation. "Oracle Solaris 11 is the most significant operating system release of the past decade. With built-in server, storage and now, network virtualization, Oracle Solaris 11 delivers the industry's first cloud OS," said John Fowler, executive vice president at Oracle. "Customers can simplify their enterprise deployments, drive up utilization of their data center assets, and run Oracle and other enterprise applications faster all within a secure, scalable cloud or traditional enterprise environment."
Solaris runs on both SPARC and x86, but my guess is that if you use this stuff on a regular basis, you’re probably already aware of all this and are already mulling over implementing Solaris 11 within your organisation.
Nevada has been Solaris 11 for so long I almost forgot it would one day be an actual release.
Yeah, I've been running Solaris 11 Express snv_151 since last winter and updated to 151a this summer. It's all been great as a preview and I can't wait to get some services on this.
Of most interest is Crossbow. On older releases we had to do funky things, provisioning zones based on the network they live on instead of having the global zone dole out zones and connect them to separate VLANs. The reason this was a problem in older releases was static routes in the global zone.
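To give a flavour of what Crossbow makes possible, here is a minimal sketch; the dladm/zonecfg subcommands are standard Solaris 11, but the link, VLAN ID and zone name are made up for illustration:

    # Create a virtual NIC on VLAN 42 on top of the physical link net0
    dladm create-vnic -l net0 -v 42 vnic0

    # Hand that VNIC to a zone, so the global zone doles out the network
    zonecfg -z webzone "add net; set physical=vnic0; end"

Each zone then carries its own link and its own routes, so the old global-zone static-route contortions go away.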
With deduplication, there are opportunities for backup solutions too.
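For instance, turning on ZFS dedup for a backup dataset is a one-liner (pool and dataset names assumed; note that dedup is famously RAM-hungry):

    # Store identical blocks from repeated backups only once
    zfs set dedup=on tank/backups
    zfs get dedup,used tank/backups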
Not really. Solaris 11 is not based on Nevada. It’s based on Indiana! It’s a much more radical change from the classic Solaris than you would imagine and much closer to a GNU/Linux distro in some aspects. It still has the Solaris goodies that no Linux distro has.
And this is the first comment? I guess that shows how much interest there is in a new version of the now closed-source Solaris.
Pity Sun chose an intentionally GPL-incompatible license for Solaris; it really could have been something if there had only been more interest in it and cross-pollination with Linux.
I guess Solaris is still free for non-commercial use, or something to that extent, though.
Yeah it is.
I did use Solaris 10 Express for bits and pieces, before moving most of the servers to Scientific Linux or OpenBSD.
I remember Zones being the big thing circa 2006.
I think Solaris is too much “proper unix” … I remember it being harder to use.
With a GNU userland it's exactly as hard to use as Linux or *BSD.
Doing specific things on Solaris is very different from Linux and BSD stuff … it's hard to explain. IMO Solaris is as different from Linux as Windows is from BSD … massively.
It really isn’t.
I administrate a mixture of Linux and Solaris boxes, and while there are a few quirks for Solaris I have to remember (eg vfstab instead of fstab, user space commands having slightly different switches, etc), largely the skill sets are transferable (unlike building a Windows server and then expecting the configuration of OpenBSD or FreeBSD to be similar).
In fact, you can configure (via shells, aliases or even compiling your own user space tools) the two environments to behave more alike if you really struggle when switching platforms, whereas it's not really possible to do that with Win/BSD. While you can get a POSIX environment for Windows (Cygwin) and slap a GUI on BSD, Windows and BSD are as different as night and day.
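As a small illustration of making the two feel alike, assuming the GNU userland Solaris 11 ships in /usr/gnu/bin (the alias is just an example of papering over a quirk):

    # In root's .profile on the Solaris box: prefer the GNU tools
    export PATH=/usr/gnu/bin:$PATH

    # Hide the vfstab/fstab difference mentioned above
    alias fstab='cat /etc/vfstab'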
That is the problem: they are so similar I used to get confused … especially in the labs, where we were running Sun Ray thin clients against a mixture of Red Hat and Solaris.
I personally find Linux to be easier.
Linux is easier for some things, Solaris is easier for others.
It all depends on what you’re trying to achieve and what your skill sets are.
I fail to see what's so different. To me, Solaris, Linux, Mac OS X, FreeBSD, … are quite similar. Solaris is extremely well documented. It should not be a problem to manage for somebody with basic unix skills from some other unix-like environment.
I've been using Solaris since v8 came out. In general it's not that hard to move between Linux and Solaris, certainly not as hard as moving from Linux to default AIX. And if you use Nexenta then it's really not that different.
But then there’s hardware configuration, firewall, zones and other lower level stuff that is massively different.
GPLv2 is incompatible with GPLv3. The GPL license is a joke.
I have no idea what that’s got to do with Sun creating yet another incompatible license, but, whatever.
No it isn’t, what’s this nonsense?
Considering the GPL is incompatible with pretty much everything, you shouldn't really hold that against Sun.
Thom, you obviously don't understand what the "cloud" is.
The internet isn't the cloud; the cloud is more like the time-sharing stuff of the 70s.
It is more “pay to do stuff” on someone else’s computer network.
That is arguable. I don't like the word "Cloud" used in this context. "Services available through the Internet" sounds more accurate, more buzz-free and more down to earth.
When someone says "Cloud" I always remember Web 2.0 and AJAX and other artificially created marketing/press terms for something that already existed, just with a more sophisticated name.
It is a bit of a buzzword; however, what it means is quite well defined IMO.
Web 2.0 maybe (TBH I think everyone accepts that Web 2.0 means user-generated content) … AJAX no … while there were other ways of updating the page from another server, they were extremely hackish.
You're right – the cloud is anything but services available over the internet.
Just like Web 2.0 and AJAX, "the cloud" is just a shorter name for what's changed. There were many buzzwords, but Web 2.0 and AJAX stood the test of time. The cloud is also over 5 years old now (arguably, starting with Amazon's EC2).
XHR (sometimes known as AJAX) had options for "updating the page from another server" only via a vulnerability type called Cross-Site Scripting. It's still very hackish to get data into a page on one domain from another domain.
There are other ways … but usually I set up a handler server-side (same domain) … as the other method, I think you can use an iframe (I think the Twitter and Facebook widgets do this).
I use an iframe and postMessage, and am already looking at cross-domain resources.
The definition of the cloud is quite murky and ill-defined.
From a consumer's perspective, services like Dropbox, Google Docs and iCloud are "the cloud". How does anyone know that this "cloud" isn't just a single server? How is it different from the internet in this case?
From a service provider's standpoint: an architecture with high scalability and availability.
How about – omni-available services? (From the consumer side, obviously.)
Gmail, Hotmail and WordPress could fall into such a category too, and they are not marketed as cloud things.
LOLWOT?
WordPress is a CMS. Gmail and Hotmail are free email accounts.
Dropbox is definitely Cloud.
The cloud is not the same as web hosting or a separate server. Your app/data sits somewhere and it just works …
Actually I think webmail is the perfect example of “cloud computing”.
They hold your e-mails and attachments. They hold your contacts and calendar schedule, and many are even used as a passport to other services (eg: G+, the Android app store, and real-time communication tools like MSN and Google Talk). Plus all of this data is held centrally, processed centrally and databased around individual accounts, again on a centralised set of servers. All of this data is then available on countless devices, from fat PCs to thin clients which just display a user interface to the services provided for individuals by the centralised servers.
It’s pretty much the definition of cloud computing in my opinion.
However, the very fact that two computing professionals can have such vastly different opinions on what cloud computing is just reinforces why I hate the term so much. I think it's an utterly retarded word invented by marketing managers to promote the same old shit that engineers and programmers have been developing every year pretty much since the days of TSS.
The cloud and cloud computing are both a result of engineering lingo. There was not much marketing behind the name. I was surely using a cloud symbol to denote abstract services available remotely over a third-party network at least 10 years ago.
I see.
Thank you for the correction
You mean like… omni-consumer products, erm services?
For a more pictorial approach to proper IT terminology, please consult this source of wisdom:
http://xkcd.com/908/
No, really, I'm not trying to give a better explanation of what "the cloud" is, I just like to poke fun at it now.
http://en.wikipedia.org/wiki/Cloud_computing
I think that is the important difference.
Ok, so my co-located server == cloud? End users don’t have to care where my website (er, I mean “service”) is coming from.
From my point of view, "Cloud" is as simple as "someone else worries about your infrastructure". Then again I may be biased, as I work for HP Cloud Services, and I'm one of the someones who has to worry about it for you…
Well some people on here think WordPress is part of the cloud.
There’s the WordPress hosted service, which you could think of as “cloud” and there’s the WordPress software, which is software. So everybody’s right!
You don't necessarily need to pay (well, not with money at least); you are paying with your personal data being exposed.
…but ultimately doomed I think, no matter what buzzword du jour it uses. I've seen so many customers jumping ship due to astronomical license fee hikes, an uncertain future, and having bought expensive SPARC hardware three years ago, being expressly (see what I did there?) told it would run Solaris 11 when it came out, only for Oracle to turn around and tell them the hardware can be no older than two years for it to be supported.
I like Solaris, though it has a whole ton of baggage left over from 10 years ago (like root's default home directory being / and sh still being the default shell even though it's no longer a statically linked binary). I just wish Oracle hadn't gotten their grubby mitts on it, or that Sun had used a BSD license or something.
Now there are many issues about this S11 thing, but this comment calls for a quick fact check.
Root's home in S11 is /root/ (not that I get how that defines the modernity of an OS), and while there still is /bin/sh (/bin and /sbin being links to /usr/{s,}bin/), it actually runs /usr/bin/ksh93 (and again, not sure why that would necessarily be better), especially as it's not a big deal to choose a shell of one's liking.
One of my complaints about this release would be that development had a lot of the F/OSS vibe and was happening on, and focusing on, engineers' desktops. So in fact running it on SPARC, for the first time in Solaris' history, might see some shortcomings here and there.
Just installed it, and yes, root's default directory is /root and the new default shell is bash. Quite a departure, I must say.
The reason why / is a bad idea these days for root's home directory is because it mixes root's user files and directories with those of the system. This wasn't an issue way back when you had one shell, one profile for that shell and no ssh (to name the first things I can think of off the top of my head). Now that you do, those config files and directories end up directly on the / filesystem, making it harder to administer, and even though you really should keep root usage to a minimum, it's just not always the case. Hence why most admins create a /root directory first thing after an install of Sol 8/9/10 and change root's home directory.
Are you seriously trying to say to me that sh is just as good as ksh or bash? Please. As for it running /usr/bin/ksh93 in sh mode, all the more reason not to bother with it.
The question was, how would this be a sign of modernity (or of not toting old baggage)? But speaking of bad ideas, generally logging in (and even more so directly ssh'ing in) as root does not seem to be the best one of them all (as you've correctly pointed out). Especially true on Solaris (10+), which tends to utilize RBAC/sudo; clinging to root usage might by some be considered a bit archaic.
I am seriously unaware of making any comparison or qualifying statement.
EDIT: The word "better" could have been misunderstood. The question was why it should be better for /bin/sh to be a symlink to /usr/bin/ksh93, presuming that having /bin/sh as the sh binary does not affect /usr/bin/ksh93 or /usr/bin/bash or /usr/bin/zsh for anyone who wants to use any of the Bourne shell family.
Merely, as long as you can choose your favourite shell to log in with and to run your scripts, I do not really understand why the presence of another one should be at all bothersome. I personally do not care for (t)csh too much, but see no reason to make a point of any system shipping it (it sure lives on my machine) or even making it default, when changing is just a one-command operation (see the sketch below).
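A sketch of that one command, assuming bash at its stock Solaris 11 path (on some releases usermod refuses to modify an account that is currently logged in):

    # Point root's login shell at bash (or ksh93, zsh, ...)
    usermod -s /usr/bin/bash root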
RBAC/sudo are great but don't cover all situations. For instance, I'm currently working a one-month contract in which they're running all their production DBs on Solaris zones. Now, I'm the only UNIX guy here, so for them to be using Solaris in the first place is a tad daft if you ask me. Anyway, the systems I'm talking about are two M4000s in two separate DCs with one global zone apiece. I've spent a week writing a KSH script that allows us to fail over one or all zones residing on one system to the other. Due to them not using Sun Clustering, it's got to be done via ssh, and as these people don't have a clue about UNIX, I've got to make it as simple as possible for them. Passwordless ssh login for root is the only way to achieve that. You seem like a knowledgeable person, therefore I'm sure you understand the situation with the .ssh directory. Now, call me a fickle person, but I'd rather that directory did not reside on /.
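For context, the passwordless setup described here is plain OpenSSH key authentication; a sketch with invented hostnames, assuming root's home has already been moved to /root:

    # On the first M4000: generate a passphrase-less key pair
    ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

    # Install the public key on the peer
    cat /root/.ssh/id_rsa.pub | ssh root@m4000-b 'cat >> /root/.ssh/authorized_keys'

    # The failover script can now run remote commands non-interactively
    ssh root@m4000-b 'zoneadm list -cv'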
Anyway, I spent the first week here cleaning up root's files on / and putting them in /root, among other things, and while checking out /etc/profile, I found some nice little additions by the dude from Sun that installed the servers, like set -o vi. Problem was, the guy didn't remember to change root's default shell to KSH, so that option was as useful as a bicycle to a fish.
In a large environment I'd agree with you, but when you've got three servers (one production, one testing and one development), using root is no biggie and much easier to deal with than implementing RBAC/sudo.
My bad, and fair enough.
My point though was that the only reason sh was the default shell was historical. Systems used to run /usr on either a separate partition or even an NFS share. If you needed to reboot in single-user mode, you wouldn't have access to this directory and therefore no access to anything but statically linked binaries. As HDs are a tad larger these days, /usr usually resides on the same slice as /. As you so rightly pointed out, today sh points to /usr/bin/ksh, so for Solaris to still use sh as the default shell can only be for historical reasons. Surely you can see that, no? I'm not saying that sh residing on the system (even though in this case it's only KSH in sh mode) is a bad thing, far from it. There are still a myriad of scripts written in sh, so I would expect there to be support for the shell.
Furthermore, the changing of the default shell to bash and of root's default directory to /root in Solaris 11 tends to lend quite a bit of weight to my arguments, don't you think?
Not even close to the truth. Solaris 11 is based on OpenSolaris. It has no more baggage. The root directory and the default shell have finally been updated, as have many other things. If you need to run Solaris 8, 9 or 10 software, you need a Solaris 8, 9 or 10 zone, as you usually can't have the software running in the global zone due to the huge amount of changes.
The license fees are high for an OS alone, but considering the whole stack, they are quite reasonable. If you need iPlanet, Oracle DB and a few other things, their license cost on Sun hardware with Solaris is 25% of what it is elsewhere. The whole package is much cheaper on Solaris and Sun hardware, even if the OS isn't.
If, on the other hand, you don’t need the Oracle software stack, it’s going to be cheaper to stay away from Oracle hardware and Solaris.
Most enterprises I worked with actually can’t live without Oracle Virtual Directory, iPlanet, Weblogic, Oracle DB, and other Oracle components so it makes sense for them.
Right now only Oracle, IBM and Microsoft offer complete software stacks (LDAP, Messaging, SSO, IdM, DB, Web, etc.). Assuming you go the Java way it’s Oracle and IBM. Oracle software at least implements industry standard protocols, unlike IBM. Try migrating from any Lotus or Tivoli product to something else. That’s the reason for which businesses stay with Oracle. Their software implements the newest technologies, and is generally standards based or standards setting.
Businesses unfortunately only talk .NET and Java and don’t care about Python and other stacks. I actually see Ruby and Python as real alternatives to .NET and Java, but the rest of the components of the stack are missing or inconsistent.
I wonder whether they will now hold to their promise and release the source, so illumos and OpenIndiana could benefit from the latest improvements.
Finally a comment worth replying to. Yes, we will see, and I'm hopeful Oracle will stand by its claim. This release also has implications for further ZFS releases. At least get zpool version 31 released so it can be imported into illumos.
Yes, a potential ZFS release will also affect FreeBSD etc.
A lot of it is “cloudwashing”:
Verb: “Taking your product and contriving some kind of relevance to the blah blah cloud”.
Solaris used to be the shit, but it became a basket case after the dotcom crash of 2000, especially with respect to x86.
Now, few people care. Even when it was hastily (and GPL-incompatibly) released as open source, it wasn’t really embraced.
The greatest thing that came out of Solaris was ZFS, but its usefulness is limited by the fact that it was never GPL compatible, and now it’s owned by Oracle (which a lot of IT shops avoid like the plague).
It’s got an interesting future in FreeBSD, but again hampered by the lack of some awesome ZFS features (like encryption) that aren’t in the FreeBSD release yet.
Other very nice Solaris features:
– RBAC (only IBM AIX can compete here)
– ZONES (FreeBSD Jails still lag here a lot)
– SMF – systemd can only dream about it (see the sketch below)
– CROSSBOW – network stack virtualization
– BOOT ENVIRONMENTS – create one, delete everything (rm -rf /*) and you still have a fully working system
These are features not seen in any other OS, not just other UNIXes …
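As a taste of SMF from that list, a minimal sketch; the service name below is the stock Solaris one, and the broken-service scenario is hypothetical:

    # Inspect a service and its dependencies
    svcs -l svc:/network/ssh:default

    # Restart it; SMF supervises the process and restarts it if it dies
    svcadm restart svc:/network/ssh:default

    # When a service drops into maintenance, ask SMF to explain why
    svcs -x
    svcadm clear svc:/network/ssh:default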
FreeBSD has provided encryption for quite a long time; it's just not a ZFS feature on FreeBSD, it's a GEOM feature. Encrypt the devices (with a key or password) using GELI and then create a ZFS pool from the encrypted devices, voila!
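A minimal sketch of that layering, with an invented device and pool name:

    # Initialise GELI on the raw disk and attach it (asks for the passphrase)
    geli init -s 4096 /dev/ada1
    geli attach /dev/ada1

    # Build the ZFS pool on the decrypted .eli provider
    zpool create tank /dev/ada1.eli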
Eh, most of those aren't really that interesting. Zoning/containers have been around for a while; I haven't seen that great a use case. Virtualization on a Type 1 hypervisor has proven much more useful in a general sense. Boot environments are neat, but they're only in Solaris, so of no use to me. Not a feature that's going to get me to drop Linux.
The last time I tried (about a year ago) to encrypt devices with GELI and then add those devices to a ZFS pool, the performance was absolutely terrible. There were weird, wildly inconsistent pauses in throughput, huge latencies, and overall it was a disaster.
So unless it’s native encryption in the file system, I’m not counting it as a feature.
It is not interesting because you don't know about this stuff. Other people are excited.
For instance, DTrace is unique; there was no tech like it before. Linux devs have even switched to Solaris just to get DTrace. That is why IBM is copying DTrace and calling it ProbeVue. FreeBSD and Mac OS X have ported DTrace. Linux is copying it, but SystemTap is a very bad and unstable copy, according to DTrace experts.
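For the curious, a classic DTrace one-liner using the standard syscall provider (run as root on any DTrace-equipped system):

    # Print every file opened system-wide, and by which process
    dtrace -n 'syscall::open*:entry { printf("%s %s", execname, copyinstr(arg0)); }'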
ZFS protects your data on disk against data corruption. No Linux filesystem does this. Your data might be corrupted, and Linux will not even notice it. This has been shown by comp-sci researchers. Do you want to read their research on this?
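The protection in question is end-to-end checksumming; verifying a pool is as simple as this (pool name assumed):

    # Walk every block, verify checksums, repair from mirror/raidz redundancy
    zpool scrub tank

    # Report any corruption found, per device
    zpool status -v tank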
Boot Environments are a killer feature. If you patch your Linux installation and something breaks, what do you do? Reinstall everything? With a ZFS BE I just reboot into GRUB and choose an earlier functioning snapshot. Almost zero downtime. I have often done weird stuff while learning Solaris and broken something. Instead of reinstalling everything, I just reboot and kill the latest snapshot which broke my install. This takes a couple of seconds, and I am back to a real functioning state. Killer feature.
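On Solaris 11 the rollback workflow looks roughly like this; beadm is the stock tool, and the BE name is an example:

    # Snapshot the running system into a new boot environment before patching
    beadm create pre-patch

    # ...patch, break something, then activate the known-good BE and reboot
    beadm activate pre-patch
    init 6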
Containers are really, really neat. Lots of sysadmins are excited about this. If you use VMware and start up several OS instances, then each guest OS will use 4GB RAM, CPU, etc. With Containers, each guest will use 40MB and almost nil CPU. One guy booted 1,000 Containers on a server with 1GB RAM; it was extremely slow, but it could be done. Try that with VMware.
Containers are the building block for virtualizing everything, the network stack, etc. Everything is virtualized in Solaris 11. Create as many network cards as you want, create as many Containers as you want, etc. Thus you can have lots of virtual servers in one physical server. This is why Solaris 11 is called the "first virtual OS" ever. In Linux, if you create a container, is everything else virtualized? The network stack? etc? No?
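A sketch of how little it takes on Solaris 11 (zone name and paths invented); on a stock install the default zone template already includes an "anet" resource, i.e. a dedicated VNIC created on the fly:

    # Define, install and boot a minimal zone
    zonecfg -z web1 "create; set zonepath=/zones/web1"
    zoneadm -z web1 install
    zoneadm -z web1 boot

    # And extra virtual NICs are one command each in the global zone
    dladm create-vnic -l net0 vnic1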
What you say is true, but it's not the entire picture. Virtualization is getting better by the day. There are memory ballooning drivers that will report the free memory in the guest so it can be marked free in the host. Same-page merging can also decrease the memory footprint of a VM. Using KVM with an appropriate Linux kernel inside occupies surprisingly little memory.
You have to specify which Linux container flavour you have in mind. Both OpenVZ and LXC have network stack virtualization. In fact, OpenVZ may have everything Solaris Zones have (maybe not; I'm not familiar with Zones, but OpenVZ is very feature-rich). LXC has some shortcomings (e.g. there's no UID/GID virtualization), but it is in all recent kernels, so it's very convenient to experiment with.
In short, I don't think that Solaris can boast big advantages in virtualization of any kind compared to Linux.
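For anyone who wants to poke at the LXC side of that comparison, a minimal sketch with the classic LXC tools (container name and template are examples; template availability depends on the distro):

    # Create a container from a template, start it, get a shell inside
    lxc-create -n web -t debian
    lxc-start -n web -d
    lxc-attach -n web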
Solaris has advantages over Linux virtualization.
(Solaris Containers are more mature than Linux's. Solaris Containers started development in 1999, under a different name. Linux containers came sometime in the early 2000s.)
Solaris has more distinct virtualization techniques, for instance LDOMs, Containers, and probably a bunch of others.
Also, Solaris has virtualized everything, including the network stack. Linux has not. That is why Solaris is "the first virtual OS" – a gimmick, yes, but still true. Thus Linux is copying Solaris Containers, as Linux copied ZFS (Btrfs) and copied DTrace (SystemTap) and probably copied a bunch of other Solaris stuff as well.
As Sidicas said:
“The new Solaris supposedly lets you set up virtualized “zones” so you get all the benefits of virtualization without any of the drawbacks of losing all the hard drive space to multiple operating systems or getting hit with the redundant OS overhead of running multiple OSs, or having to worry about security updates for multiple OSs, on every server, etc. etc… It’s sort of like Virtualization meets Chroot.. Then consider that you can easily take these “zones” and automatically duplicate them over to other hardware to add in redundancy.. Now imagine tens of thousands of servers where every server has their “zones” synchronized onto at least a few other servers which might not even be in the same country, let alone the same room… Where you can just walk around and power off random servers or even an entire data center and it won’t matter and the customers won’t even notice because of all the “enterprise class” redundancy… This is a “cloud” solution.. A whole ton of money poured into massively redundant self-managing server infrastructure and Oracle wants to be in on it…”
Oh, I know about that stuff. I just don’t care.
It's a nice feature, don't get me wrong. But the worst part about DTrace has always been Sun, and now it's Oracle. Solaris isn't widely deployed, is getting less widely deployed, and it's been off the widely-deployed radar for about 10 years now. DTrace isn't enough (for me, and for most people) to switch.
Again, I know all about ZFS. I love it, but like DTrace, the worst part about ZFS was Sun and is now Oracle. Sun was terribly run, and Oracle has just… left a very bad taste in my mouth.
A real effort has been made in the last few years to move away from being OS dependent. To keep the apps and data separated from the OS, or to use virtualization technology (like snapshots) to obviate issues like that. Nice feature, but eh, not enough to switch.
Containers I’ve found to be the least useful. It’s all one OS.
With ESXi, KVM, Xen, yes, everything is virtualized. And honestly, it’s much more flexible and useful than containers. Any OS, not just Solaris. Add as many network cards as you want. I can migrate a VM from one host to another live. Plenty of open source and closed source appliances for storage, firewalls, IPS/IDS, etc.
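Live migration with KVM, for example, is a one-liner through libvirt (guest and host names invented):

    # Move a running guest to another host with negligible downtime
    virsh migrate --live webvm qemu+ssh://host2/system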
Sun made Solaris less relevant, and Oracle made it even less so (by closing it back up and abandoning OpenSolaris). It’s a shame, but that’s why I say ‘meh’ with Solaris.
I heard that KVM is being ported to Solaris. Is it true?
Does anyone have experience with kvm on Solaris?
Is there another way of full hardware virtualization on Solaris?
It wouldn't surprise me. Oracle actually has a virtualization platform (Oracle VM), but it's basically a Xen hypervisor.
Yes, Oracle VM is Xen… but it's based on Oracle Enterprise Linux (a RHEL derivative), not Solaris.
Joyent has SmartOS, which is the Illumos (forked OpenSolaris) kernel + KVM ( http://smartos.org ). Not sure if the KVM work is moving back into Illumos, in which case OpenIndiana, Nexenta and other Illumos siblings could have KVM capabilities in the future as well.
Solaris Logical Domains for the SPARC platform are a fully virtualized environment. They have been renamed under the umbrella of Oracle VM and aligned with the Xen effort for x86. Resources are allocated from a VMM or two (service and control domains). The OS that is installed does not require modification, and choices include both Solaris SPARC and Linux for SPARC.
Ok, it seems that you dislike Solaris for political reasons. That is fine with me. But on the other hand, Sun did open up most of their sources; I would like to see MS or IBM open up all of theirs. Of the big companies, only Sun did that. No one else. Did you see MS open up Windows? Sun paid 90 million USD for licenses in order to open up OpenSolaris (it used proprietary libraries).
It is because you are not a developer. There are several Linux devs that have switched to Solaris just because of DTrace.
I did not understand this. If you apply a patch or get a virus or something bad happens, then how does it help to be OS independent? You still need to reinstall everything, or do hard work at the CLI to try to repair it. Or, you can just reboot into GRUB and you are done.
It is because you are not a sysadmin at an enterprise company. Solaris is for the enterprise. A desktop user might not find Solaris compelling. Now Illumos (the open-sourced, community-driven Solaris kernel) is working to bring Linux containers back into Solaris again.
It uses much more CPU and RAM than Containers. IBM has even copied Containers, and IBM would not copy Containers if they were not useful. IBM knows virtualization.
OpenIndiana, Nexenta, SmartOS, etc. are out there. OpenIndiana is a direct fork of OpenSolaris. Community driven.
I would add DTrace to your list of nice Solaris features. However, your list of features is not all exclusive to Solaris. Similar to ZONES, AIX has LPARs and WPARs. I think that HP-UX also has something similar for their Superdome systems. I don't know much about SMF, but I don't think its functionality is exclusive to Solaris. The virtualization capabilities with LPARs on POWER5 (and newer) hardware are pretty darn impressive and perhaps the most sophisticated of all virtualization technologies available.
Close enough! ;p
Well, ZONES are the Solaris-world WPARs, and LPARs are LDOMs; dunno what Oracle calls them now, but they were called LDOMs on T1 and T2 in Sun(ny) times.
HP-UX is quite dead after Oracle said "we do not do Itanium any more", but yes, they have similar features.
He’s since changed his tune a bit… but check this out, pretty funny stuff where Larry freaks out about “The Cloud”.
http://www.youtube.com/watch?v=KmXJSeMaoTY
Actually Larry has been a big proponent of it for ages.
http://www.youtube.com/watch?feature=player_detailpage&v=8g_tcdR_pQ…
He also predicted something like the iPad in 1995 … these guys aren't stupid … I wish people would stop pretending they are brighter than the likes of Gates, Jobs and Ellison.
In the same clip he also talks about downloading and installing an OS over the internet … something we have only just achieved.
A lot of people are brighter, but probably not when it comes to business smarts.
I haven’t seen the clip but…say what? That has been possible for a long time. Granted it would take some time due to lack of bandwidth but technically possible.
Only recently, I think, has the tech caught up with the ideas.
Yeah, I remember when he went all in on network computing in the late 90s, had some crappy product and then gave up. He was never able to productise it effectively though, like Gates with tablet computing. There's another video floating around of Steve Jobs talking about "NFS dialtone" from 1997, just after NeXT got purchased by Apple. A similar concept to what the Chromebook is doing.
I too watched the video, and my interpretation was that there needs to be a reality check on what is and isn't cloud computing – cloud computing isn't some sort of magical pixie dust that can turn a crap product into something everyone desires, just as 5 years ago simply saying you supported Linux or were going to open-source a product didn't automatically resolve all the underlying fundamental issues faced by said company and its products.
Oracle, Sun and IBM were pushing cloud computing before it was called cloud computing – heck, cloud computing is little more than a rebadged idea that existed 30 years ago, but instead of a dial-up connection to a logical mainframe for shared computer time, we now have semi-smart computers that log onto a computer on the other side of the world via the internet and, voila, do the same stuff as before.
What this means to you, Thom, is that you are closer to reporting about the Solaris spinoffs plundering the Solaris 11 source code for the good of the universe
(FreeBSD, ixsystems, Nexenta, Joyent, Illumos, OpenIndiana…)
I was skeptical at first, but there is a healthy (though tiny) ecosystem thriving on the shattered remains of Sun Microsystems' Solaris.
A quick factual correction: Solaris was initially a BSD spin-off, back when it was called SunOS. While FreeBSD has incorporated some Solaris code, it is an insult – not to mention ignorant – to call it a Solaris spin-off.
Can I get a list of apps which will be broken after I upgrade? (Hint: Java.)
http://wiki.openindiana.org/oi/Building+Openjdk
How is Solaris 11 “technically” the 7th major version exactly?
Solaris 2.0 was released in 1992; if anything, this is the 12th major version of the OS.
They skipped the versions between 3 and 7.
http://en.wikipedia.org/wiki/Solaris_%28operating_system%29…
Huh? No they did not, read the link you provided.
I used a Solaris 2.6 box for years.
How about you read it … there was no version 3; the numbering jumped from 2.6 to 7.
I actually agree with you, but I count 13 major releases… don't forget 2.5.1; it was the updated release of the 2.5 version with "bug for bug compatibility" for the SPARC, x86 and PowerPC platforms.
The Solaris 2.x moniker was to denote its System V heritage, while 1.x was for the BSD-based OS. At the release of 2.7 they dropped the "2" prefix (I'm assuming because 1.x was pretty much unsupported by that time and there was no need to differentiate any longer). I have always counted the "SunOS" designation as the release, since that is what the OS actually reports on a running system anyway (Solaris 11 = SunOS 5.11).
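That designation is easy to check on a running box; this is what Solaris 11 reports:

    $ uname -sr
    SunOS 5.11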
But to each their own, if you want to count 7, 12 or 13 it doesn’t really matter much. It’s the changes under the hood that count. (Sorry for the car analogy.)
I was wondering about Thom's counting scheme. I know journalism majors are on the arts-and-letters side of campus, but I assume basic math was obtained sometime before finishing high school…
Does Solaris have comprehensive, easy-to-use GUI tools for administrators? When I supported it years ago, its GUI admin tools were rather skeletal and really not much used by most admins.
In contrast, the GUI admin tools for AIX were excellent, and those for HP/UX pretty decent. In AIX world you really didn’t need to do command line administration at all (except in special cases).
Did Solaris admins ever join the GUI world?
So you haven’t discovered the Solaris Management Console, have you?
Furthermore, I never administered a system in the GUI, since most of my systems were core installations from which I removed 50 or 60 useless packages (FibreChannel, iSCSI, InfiniBand, J2SE, J2RE, and many others if they were unused) and added only the ones that I needed. The GUI wasn't on that list. The footprint of such an installation was 300MB.
I applied custom hardening profiles using SUNWjass afterwards and a decent firewall configuration on each system, independent of the switches, firewalls, routers and load-balancers in front of them, and I always used IPSec between servers, even adjacent ones.
That's why my servers were never hacked "Sony style". It's much better to be paranoid than to recover from a hack. Having so little software on the systems meant that upgrades always worked like a charm and that I almost never ran into bugs in software. Reducing the complexity of the system to the absolute minimum (and slightly beyond, if possible) is the way to go.
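On Solaris 10 that kind of minimisation was done with the SVR4 package tools (the package name below is purely hypothetical, standing in for whatever the box doesn't need):

    # See what is installed, then strip what the box does not need
    pkginfo | less
    pkgrm SUNWexamplepkg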
According to the Solaris 11 launch video, Oracle has some pretty comprehensive management tools that allow you to manage the whole thing: middleware, operating system and hardware, with the ability to link into Oracle so that if a hardware part is failing you can order a replacement straight from the console, etc. The downside is that they've EOL'ed much of their hardware (check the EOL'ed and removed features), which is good for those running the latest machines but bad for those who wish to have the latest features without upgrading the hardware.
Oracle did a good job, but believe me, folks, with that licence they will vanish just like SkyOS. Linux has the ability to win this competition; the future belongs to Linux, it's just a matter of time.
Give Solaris 11 a go – it isn't setting the world alight in terms of eye candy, but if you have the supported hardware it is light years ahead of where it was even a year ago. If you're hankering for a consumer operating system then you'll be sorely disappointed, but if you're looking for an enterprise-class development platform then Solaris 11 won't leave you disappointed.