One of the ongoing system administration controversies in Linux is that there is an ongoing effort to obsolete the old, cross-Unix standard network administration and diagnosis commands of ifconfig, netstat and the like and replace them with fresh new Linux specific things like ss and the ip suite. Old sysadmins are generally grumpy about this; they consider it yet another sign of Linux’s ‘not invented here’ attitude that sees Linux breaking from well-established Unix norms to go its own way. Although I’m an old sysadmin myself, I don’t have this reaction. Instead, I think that it might be both sensible and honest for Linux to go off in this direction. There are two reasons for this, one ostensible and one subtle.
Something tells me distros like Slackware, Alpine, and Void Linux will avoid these changes just as they have so far successfully avoided systemd and its perpetually buggy state.
I really, really don’t get the “break ALL the things!” mentality of modern Linux hackers. Okay, so you can boot three seconds faster than before…but now your computer hangs on shutdown, your logs are binary and all but the last successful boot log are corrupted, and memory use is through the roof for basic processes. But hey, fuck the user, let’s break something else just because we can!
while I agree in principle I kinda have the feeling that /proc will go away eventually
but that might mean ifconfig etc might need code changes… oh the horror…. erm so what…
This is possible, but does not imply that the tools’ names need to be changed. The cool thing about source code is that you can decide how the resulting binary should be named. So maybe completely change how ifconfig works on the inside, but keep the same name on the outside.
Abstraction is one of the great advantages of Linux and UNIX: you can do things in a similar (or even the same) way, for example with a shell script, without having to know how the internal implementation of the tools you are using works, which facilities it queries, or which library calls and library functions it utilizes.
Honestly, as a sysadmin I don’t have problems learning new things. This is essential for being able to do a good job, there’s just no way around it. But imagine you are a “n00b” and try to find online resources using a Google search (because people told you that this is the only way to get knowledge): Terms like “ip” aren’t that unique to a search engine. “Linux ip change address” could be such a search term, and the list of results doesn’t need to include all and only examples using the “ip” command. Even worse, if you just compare the result pages for “ip” vs. “ifconfig”, you’ll experience an unpleasant surprise (which of course isn’t even a surprise if you have a little knowledge about language and how search engines work).
So it would surely be possible (and probably advised) to re-invent ifconfig from the inside, keep the traditional interfaces, and add new functions as needed.
Naming things is one of the 2 problems in modern IT. And giving new tools stupid names that nobody can spell or pronounce properly isn’t a solution…
You know that /proc is a linux-specific hack that didn’t exist in Unix / *BSD?
Yes, it’s true– ifconfig and netstat used to (and still do) work in systems without /proc. At one time, the lack of procfs was one of the major stumbling blocks to getting Linux apps to run seamlessly on FreeBSD.
/proc will never go away. It’s too convenient.
Huh? Linux specific hack?
/proc was invented by Sun.
You know, back before /proc, tools like ps had to read the data out of the kernel directly. It was terrible.
Wikipedia has a nice entry about procfs: https://en.wikipedia.org/wiki/Procfs#History
Procfs was introduced by Unix 8th edition in 1984. Later on Plan 9 expanded the idea. Then other Unices cloned the Plan 9 implementation – 4.4 BSD, Solaris and Linux.
But OpenBSD 5.7 removed it. And FreeBSD deprecated it.
Avoid what changes? The kernel networking interfaces? Or the tools that accurately reflect the nature of your system? I’m really confused.
Maybe you didn’t read the article? His example was pretty clear as to what the real problem is and the difficult choices that people are presented with: either allow old tools to lie about the system, or break scripts that rely on them.
And what is the problem with adding an option like, say, ‘-f’ for ‘full disclosure’ to ifconfig and letting it produce accurate and detailed output?
I’m a bit annoyed by false dichotomies people keep presenting. In the end, instead of dealing with just one more format to parse we have a complete new set, what is the gain?
And let’s not start talking about the commonly incomplete and frequently bug-ridden “I-do-not-want-to-read-someone-else’s-code-and-try-to-understand-it-so-I-coded-a-fresh-app” syndrome that takes ages to stabilize.
I have nothing against improving situations where things are not working properly but, really, most of the time what we see are pet-peeve excuses because, as developers, “we-do-it-because-we-can”.
Yeah, they could add a -f flag or whatever, but they’d have to completely rewrite the tool to use the newer better interface.
So disrupt an existing tool that works fine for most people in most situations, but can still lie.
or write a different tool?
Btw, I don’t think anyone would say that ss or ip are “buggy” or “took ages to stabilize”
I understand that does happen, but not in this case.
This shift has been happening for years, and it’s one that’s been taken up by pretty much every distro out there.
Alpine already uses iproute2 instead of ifconfig in its internal networking scripts (well, actually it uses the busybox implementation of the ‘ip’ command, but you get the point).
Void also uses the ‘ip’ command in its scripts instead of classic ifconfig (not that it really matters with Void being functionally dead right now).
Slackware I’m not sure about, but I’d be willing to bet that they’re one of the last not using it.
The reality is that almost everything is using either iproute2, or the Busybox implementation of the ‘ip’ command internally right now, and the only reason they keep around ifconfig is for compatibility.
the performance argument ….
These systems have been around for ages – back when computers, disks etc were much *much* slower than today….
Yet now it’s a problem?
Linux runs on a wide variety of platforms with wildly varying workloads. It is very presumptuous of you to assume a performance improvement is not needed. If it’s faster because people need it to be faster, that’s a good thing™.
Some more color on this: it’s about the performance of the underlying technologies rather than the tools themselves. The old tools use the /proc filesystem, which is slow; the new tools use netlink, which is fast. If you need network stats rapidly, use netlink.
https://www.vividcortex.com/blog/2014/09/22/using-netlink-to-optimiz…
But yeah the performance of ip vs ifconfig probably doesn’t matter much to people unless you are doing something really odd.
First off, performance is a good thing, period. There is absolutely no excuse for not writing efficient and performant code.
Second, scalability is the issue here. There are systems with hundreds or even thousands of network interfaces out there, and doing simple things like listing all of them with `ifconfig` takes a long time. Same goes for ‘netstat’, though that’s even worse, because it’s not unusual for a busy server to have over a million open network connections.
Third, the bigger issue with using /proc that they don’t mention is that it’s racy. You can hardly do anything atomically when dealing with /proc, which is a bit problematic when you need reliable information.
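For context on what the /proc-based tools actually do: netstat has to re-read and text-parse the whole of /proc/net/tcp on every invocation, decoding hex-encoded little-endian addresses line by line. Here is a minimal, hypothetical sketch of that parsing step (the column layout is the documented /proc/net/tcp format; the function names and sample text are made up for illustration):

```python
def decode_endpoint(hexpair):
    """Decode a /proc/net/tcp address like '0100007F:0050' into ('127.0.0.1', 80).
    The IP is a little-endian hex dword; the port is plain hex."""
    ip_hex, port_hex = hexpair.split(':')
    # Reverse the 4 bytes of the IP (little-endian on x86).
    octets = [str(int(ip_hex[i:i + 2], 16)) for i in range(6, -2, -2)]
    return '.'.join(octets), int(port_hex, 16)

def parse_proc_net_tcp(text):
    """Parse /proc/net/tcp text into (local, remote, state) tuples."""
    entries = []
    for line in text.splitlines()[1:]:   # skip the header line
        fields = line.split()
        if len(fields) < 4:
            continue
        local = decode_endpoint(fields[1])
        remote = decode_endpoint(fields[2])
        state = int(fields[3], 16)       # e.g. 0x0A == TCP_LISTEN
        entries.append((local, remote, state))
    return entries

# With a million connections, this whole file is re-generated by the kernel,
# re-read, and re-parsed on every query -- which is why netlink's binary
# socket dumps scale so much better.
sample = (
    "  sl  local_address rem_address   st ...\n"
    "   0: 0100007F:0050 00000000:0000 0A ...\n"
)
print(parse_proc_net_tcp(sample))
```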
Just invent a tool that can take care of all this by editing the appropriate config files.
You do not need to replace anything; you just have to build a decent tool that can be installed optionally. No need to scrap everything for the sake of new. That way, the old conservative sysadmins can still do things the way they feel comfortable with.
The argument is not about coming up with better tools, but about removing existing ones and breaking things along the way just for the sake of it. Nobody would care about, or pick up, that stuff otherwise.
Every time classic (and working) tools are obsolesced by fancy new counterparts, it’s like a shot in the foot for the Linux community.
That’s not the best move if we want to attract people to the Linux Universe.
*sigh*
Yeah, no.
How exactly is it so much of an issue to type ‘ss’ instead of ‘netstat’, given that they take pretty much the same options (at least, the ones people actually use with any regularity are the same)? Especially considering that ‘netstat’ literally doesn’t show all the information properly (for example, it doesn’t display whether multiple processes are listening on the same socket, either via SO_REUSEPORT or through other means).
‘ip’ versus ‘ifconfig’ I can understand a bit more (to someone who doesn’t know much about networking, the ‘ifconfig’ output is a bit easier to understand than ‘ip addr show’), but it still doesn’t consistently show everything like it should (it doesn’t show address lifetimes for example, which is extremely important in some configurations).
I used to have a hard time justifying the Linux-specific networking tools in particular, but just this very day I found myself using ss (to work out the name of the process with some open ports I didn’t recognise) and nmcli/firewall-cmd (to set up a bridge, hook in some VMs, set firewall zones, etc.).
With nmcli in particular – and this may be entirely my flawed recollection – it seems to have become much easier to use in the last few years. To the point that if you know anything about networking, its usage is essentially self-describing (certainly for setting IP addresses and DNS parameters, anyway). As for ss, today was the first time I’d used it, and it seems a whole lot less obtuse than netstat for my use case (listing listening TCP and UDP ports and their owning processes).
Not saying there is anything wrong with the old way of doing things, but I set up a minimal Fedora VM and a FreeBSD VM (both for dev work), and from the point of view of having to read less documentation, I found the Fedora VM a bit easier to set up.
nmcli is harder to defend. Its command set has changed pretty quickly and without much real understanding on my part of why it needed to change. There are reasons why the networking tools needed to change, based on the changing network abilities of Linux. Not so for nmcli, that I’m aware of.
Edit: was a bit harsh in my earlier comment about nmcli. It annoys me, but maybe there are reasons I’m not aware of for why it changes so much…
A LOT of system-level changes to FOSS operating systems are driven by the needs/wants of data-centers.
The main advantage of journaling filesystems over soft updates is that they don’t need a background fsck after a crash, but I never found that to be much of an issue on my personal FreeBSD/NetBSD/OpenBSD systems. (It was a problem on Linux systems with ext2 partitions mounted r+w because in that case fsck has to look for more types of corruption and potentially try to guess how to repair things.) The only time I could imagine really wanting a journal is if I had huge, slow disks, or such a large array that a background fsck was killing system I/O. That’s pretty hypothetical for me.
Systemd was another yawner for me because BSD Init was already reasonably fast. The only systems that I found unpleasantly slow were older Windows and Solaris revs and linux distros with SysV Init. (I haven’t run Slackware in so long that I can’t remember how fast their BSD-style Init was.)
Some automounting systems also try to streamline things that are already fast in ways that make them inflexible in the process. Most systems already had a daemon that was aware of attach/detach (and similar) events and could call a shell script in response to them, which let you handle the rest in ways that are generic, portable, desktop-agnostic and played nice with the shell. When I set my system up that way, the hardest part wasn’t getting the low-level stuff to work (which was easy); it was that KDE and Gnome were hardcoded to work with specific backends. KDE has fstab polling on some systems, and that gets the job done, but there are more elegant ways to allow the GUI to get the information it needs without depending on specific kernel features or daemons (environment variables, config files).
Systemd wasn’t invented for speed so much as for correctness in process management. A lot of people, to your point, don’t care about managing processes, especially desktop sysadmins with simpler workloads and less painful uptime requirements.
Before journaling filesystems the size of a big disk was 2 GB.
You sure wouldn’t want to fsck today’s 8 TB drives on ext2.
It seems that ifconfig is capable of showing multiple IPv6 addresses, but only shows a single IPv4 address – likely due to old scripts that parse its output and don’t expect to handle multiple addresses. Any scripts dealing with IPv6 will be newer and already take that into account.
Actually, I remember a time when ifconfig could easily add / remove / show multiple ip addresses on a single nic.
I think it was last week.
They used to show up as eth0.1, eth0.2, etc.. Well, that’s not entirely true, since I was on a FreeBSD box– was probably en0.1.
My problem with “ip” is it’s a tool for managing IP. ifconfig is a tool for managing interfaces. While there’s some overlap, once upon a time, we had other protocols besides TCP/IP to worry about.
Does that matter so much today? Probably not.
Of the “modern” network managers, I prefer wicked (in spite of poor adoption)– but for servers in a datacenter, it’s hard to beat ye olde ifup/ifdown config files.
I dislike Network Manager because I feel it goes out of its way to make things obfuscated. Also, it wouldn’t let me define my primary IP as DHCP and my secondary (on the same interface) as static.
You still can do things the old ways in Linux. Use one IP per virtual interface. It’ll still work.
You just can’t use multiple IPs on one interface via “ip” or some other tool and then also use “ifconfig”
I just don’t get (don’t want to, really) the mentality. Keeping tools up to date and making old tools smarter, i.e., adding functionality without breaking everything, should be the way to go in cases like this. Making a gazillion new tools with the needed functionality, but too often surrounded by some weird sense of new-age we-can-do-better elitism… well, I don’t feel the niceness anywhere.
FOSS is [also] about having choice, the freedom to use the OS and the tools you want. However, if not done properly, we could end up with even more serious fragmentation (besides using systemd or not, which is already bad enough). My point is, these days cooperation in the FOSS world seems to erode quite quickly, replaced by forced choices.
I don’t mind if new tools come around and make our lives easier – although the frequently mentioned multiple-IPs-per-interface example of “ip”’s superiority is quite bogus – and I personally don’t appreciate the systemd way of doing things, i.e., aggregating multiple tools’ functionalities into one tool. It’s not keeping it simple, and it’s not even a “modern” mentality (that word keeps popping up wherever ip and ss are mentioned; I don’t much appreciate that). Oh, btw, using “ss” as a name (besides the historical stuff), I always found that stupid; nothing good about it except its shortness (which doesn’t much matter anyway).
I don’t get your complaint. Just go your own way and don’t use the new tools.
interface.X nomenclature is used to represent vlans, not IP addresses.
On FreeBSD, multiple IP addresses are listed one per line in the section for the interface:
igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM>
ether 00:25:90:56:12:88
inet 10.1.2.3 netmask 0xffff0000 broadcast 10.1.2.255
inet 10.1.2.4 netmask 0xffff0000 broadcast 255.255.255.255
inet 10.2.3.4 netmask 0xffff0000 broadcast 10.2.3.255
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
On Linux, multiple IP addresses are added to separate alias interfaces that take the form interfaceX:Y where the main interface is just called interfaceX.
eth0 Link encap:Ethernet HWaddr 52:54:00:07:cc:9f
inet addr:10.1.1.32 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe07:cc9f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:98469432 errors:0 dropped:1787483 overruns:0 frame:0
TX packets:53650870 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:129858679741 (120.9 GiB) TX bytes:20517479875 (19.1 GiB)
Interrupt:10 Base address:0x4000
eth0:0 Link encap:Ethernet HWaddr 52:54:00:07:cc:9f
inet addr:10.1.1.33 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe07:cc9f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:98469432 errors:0 dropped:1787483 overruns:0 frame:0
TX packets:53650870 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:129858679741 (120.9 GiB) TX bytes:20517479875 (19.1 GiB)
Interrupt:10 Base address:0x4000
Personally, I like the FreeBSD display a lot more than the Linux display. And I prefer the FreeBSD ifconfig (which is used for virtually everything related to configuring a network interface, regardless of type) compared to the Linux bazillion-and-1 different tools for configuring different parts of an interface.
I really wish the Linux devs would get “fix the existing tool, don’t just write a new one” through their heads. Don’t add another layer on top to try and paper over issues; fix the damn issues and modernise the existing tools already!
And of course the FreeBSD one doesn’t use /proc either, which voids that whole line of specious argument in the original blog post too.
Both I and the original blog poster are merely assuming that /proc is slower than the platform-specific alternatives: it seems plausible, but I see no measurements in evidence, or even evidence that ifconfig performance is an issue in any particular situation. Certainly isn’t on my systems, given the number of times interfaces are configured or queried.
If the standard tool is inaccurate, why replace it with a new one instead of fixing the existing one?
Because older scripts people have written assume the old behavior. So when you upgrade to a new OS version, nothing changes and your system works the same as it always did. If your setup needed the new, correct output, your tooling would need to change anyway.
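The parsing-scripts point is worth making concrete. Plenty of admin scripts scrape ifconfig’s human-oriented output with patterns like the one below; the moment the output format changes (or the tool is swapped for ip, whose output looks completely different), they silently break. A hypothetical example of such a fragile extractor, using abbreviated sample output for both tools:

```python
import re

# Classic net-tools ifconfig output, as countless scripts expect it.
IFCONFIG_OUTPUT = """\
eth0      Link encap:Ethernet  HWaddr 52:54:00:07:cc:9f
          inet addr:10.1.1.32  Bcast:10.1.1.255  Mask:255.255.255.0
"""

# iproute2's 'ip addr show' prints the same fact in a different shape.
IP_OUTPUT = """\
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 10.1.1.32/24 brd 10.1.1.255 scope global eth0
"""

def first_ipv4(text):
    """Scrape the first IPv4 address the way old scripts do: by matching
    the literal 'inet addr:' label that only net-tools ifconfig prints."""
    m = re.search(r'inet addr:(\S+)', text)
    return m.group(1) if m else None

print(first_ipv4(IFCONFIG_OUTPUT))  # finds 10.1.1.32
print(first_ipv4(IP_OUTPUT))        # None: same information, different format
```

The script “works” until the day the output format it hardcodes goes away, which is exactly the compatibility bind the comment above describes.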
IOW, hysterical raisins are raised as an excuse for not fixing things in Linux-land, when every other OS out there has figured out how to do this without issues.
There’s nothing wrong with keeping backwards compatibility for a while, having a proper deprecation timeout, and removing the old features in a new release.
FreeBSD users have gone through many changes to ifconfig output over the years without bringing the Internet to a grinding halt. And now you can use ifconfig to configure pretty much any kind of network interface, with multiple IPs, with fail-over via CARP, with bridging, with vlans, with wireless, etc. And everyone’s scripts continue to work.
Not wanting to fix the existing tools is just a sign of laziness and immaturity as a developer (“it’s too hard!”). But, that’s the thing with Linux devs: why fix the existing tools when we can just develop new shiny ones!
No, it’s not laziness on the developer’s part; it’s concern for not breaking other people’s use cases. The right solution for accessing the data is different. I don’t think completely rewriting the tools while keeping the same name was the right solution. But perhaps you could ding the devs of the new tools for not following, or at least allowing, output similar to the old ones.
Because Not Invented Here, of course!
So /proc is going away and is deprecated. That’s just fine, even commendable.
But why change the names and parameters of well known commands when it is not necessary? This makes no sense from either a training standpoint or from the point of an administrator.
Now all those shell scripts that everyone loves to use have to be rewritten. Backported scripts or skills learned on other Unixes won’t serve. Everyone has to learn a new command set that will be unfamiliar to people who use a different or older Unix. Nothing is gained by the change. There is no real benefit.
Once again, the Linux distribution community has no sense! This is just another example in a long list of changes made for the sake of change.
ifconfig and route produce far better output by default than ip.
This is the reason why I will keep on using them as long as possible.
Young people come in (and I work at a fairly well-known bank that does big data) and decide… you know what!?
Since we have to build a large number of nodes in a cluster, we need to use six different languages/config formats to configure the boxes and build them, because YAML, Python, Ruby, Salt, and Chef are new and cool.
Did I miss anything?
Honestly, I think somewhere someone has way too much time on their hands.
I have always built large numbers of machines with just a couple of tools:
BASH, RPMs, TFTP, DHCPD, and NFS for all phases of security, configuration management, and bare-metal building.
That’s about it. I am forced at work to use all of these nutty variations of languages, like Puppet for example.
All these new system admin tools seem to be directed at an effort to throw 25 years of building POSIX UNIX systems under the bus. With no real advantages, and in many ways lots of disadvantages, to building and configuring machines.
INCREDIBLY IRRITATING.
I love all the comments acting like this is something new being pushed by those crazy kids and the same folks “forcing” systemd on Linux. The deprecation of net-tools was announced 9 years ago, and at that point net-tools hadn’t been updated for 10 years.