I would honestly serve at the altar of the person that did this. Keep the debugging information, but for the love of god, make your email client do something pretty and useful with it.
It is quite clear from the screenshot of the first, “machine” response that some mactard tried to send a 44.6MB email.
It states twice that the email was too big, and it gives the exact size of the rejected email followed by the exact email size limit.
People, email is not the way to exchange huge files, for many reasons, not the least of which is that one often does not know what kind of burden a huge file will put on the email recipient.
Furthermore, let’s not add to the bandwidth strain by putting fancy graphics into automated responses.
This isn’t part of the email – the email client parses the error bounce email and turns it into this.
Indeed, it can’t be all that hard to search a bounced-back email like that for keywords which trigger a response template like the one shown. You’d have thought Apple especially, as the kings of polish and facade, would have sorted that out.
I know different mail servers will respond differently, but they’ll generally follow the same patterns. Clients manage it with networking protocols like Samba (whose responses also differ from server to server), so why not with mail?
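Nothing in the bounce dictates how a client has to present it, so even a crude keyword scan gets most of the way there. A minimal sketch of the idea in Python, with made-up patterns (real bounce wording varies per server, so a real client would need a much longer list and a raw-text fallback):

```python
import re

# Hypothetical patterns for illustration; real bounce texts differ per server,
# so any real client would need many more patterns plus a generic fallback.
BOUNCE_PATTERNS = [
    (re.compile(r"message size (\d+) exceeds size limit (\d+)", re.I),
     "Your message ({0} bytes) is larger than the receiving server allows ({1} bytes)."),
    (re.compile(r"mailbox (is )?full", re.I),
     "The recipient's mailbox is full."),
    (re.compile(r"user unknown|no such user", re.I),
     "That address doesn't seem to exist."),
]

def friendly_bounce(bounce_text: str) -> str:
    """Turn a free-form bounce message into a short human-readable summary."""
    for pattern, template in BOUNCE_PATTERNS:
        m = pattern.search(bounce_text)
        if m:
            return template.format(*m.groups())
    return "The message couldn't be delivered (see the full bounce below)."
```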
Well that’s certainly better for the Internet.
However, it would be more advantageous if people simply knew better than to send such a huge email; we still have the bandwidth wasted on the original outgoing email, plus the size of the bounce back.
And the world would be much improved in general if people would just use their brains more; it won’t hurt too much to try to read and understand the horrible “machine” response.
Instead of “Think Different,” a wiser slogan is “Just… Think.”
True, although usually these bouncebacks don’t include the original attachments, so the reply should’ve only been a few KB at most. What I do find interesting is that the limit is something like 34MB – a limit which may have convinced the poster in the past that it *was* ok to be sending uncompressed Photoshop documents as attachments.
I’ve had to spend considerable time and effort on the phone with various people, explaining file sizes, how to find them, and why they matter. In these times of 100Mbit broadband, multithreaded email clients and terabytes of hard drive space, the average user has no need to worry about file sizes in everyday life… Which is fair enough in most cases, but shouldn’t mean they end up completely ignorant of it either.
Even the email itself, without attachments, can be made more “bandwidth-friendly” by using plain text. It’s rare that I use anything other than plain text to send emails.
If everyone thought like this, we would be still using DOS.
I cannot fathom why anyone would think that displaying a friendly error is a bad idea.
You are also assuming that someone using the client has a conceptual understanding of how email works, beyond “I create a message and send it”.
Whether it is feasible is an entirely different subject.
That’s like saying, “if everyone knew math, we would still be using pen and paper to calculate, instead of calculators or spreadsheets.”
No. If people used their brains more, we would probably have greater advancements, and people wouldn’t have to schedule an appointment at the “Genius” bar every time they need to have their fly zipped.
Because it uses more resources, and it is unnecessary if one merely reads the text error.
Also, using a “friendly,” interactive, graphical interface in place of something that is easily explained in a brief text message tends to make Johnny stupid and helpless.
Somehow the point was missed. I am not assuming anything; I am suggesting that we put effort into educating people in basic computer literacy and that we encourage people to think.
Educating people in rudimentary literacy is certainly feasible. Literacy was much greater in the middle of the 20th century (when there were no GUIs).
Likewise, there have been several ages of enlightenment and independent thinking, throughout history.
Your argument is so idiotic I honestly don’t know how to reply. But I will make an attempt.
Having something clearly pointed out is advantageous; it means I can use my brain for much more useful things.
It’s kinda why we have road signs … I get the message quickly, and it allows me to concentrate on driving.
I am a Web developer; I normally do things such as Web service integration. Your stance is the equivalent of saying that you shouldn’t have custom error pages on a Website.
When I am developing I want proper filthy looking errors with lots of detail, when I am browsing … I want a nice friendly message.
The resources-on-the-client argument is also utter bullshit; my phone can render particle physics at 30 FPS in the browser. I think it can handle parsing some text.
How parsing an email message and displaying a clear and concise explanation of a sending failure turned into a diatribe about computer literacy and standards of education is utterly ridiculous.
In any case, understanding the principle is more important than understanding the specifics … parsing error messages is a specific and doesn’t improve anyone’s conceptual understanding of how email works. It only teaches someone to pick apart a text string.
I am really fed up with geeks that seem to live in ivory CLI towers and mentally masturbate about how clever they are for understanding archaic syntax. It doesn’t make you clever, and those that aren’t interested in reading it aren’t “idiots”.
It just means they have got other shit to do that they think is more important, like actually sending the email.
“Message size X exceeds size limit Y” is one of the easier bounce messages to understand, really.
Unfortunately there’s no reliable way to deliver useful bounce information to the end-client, since the format of the bounce message isn’t consistently standardized in practice. This means you end up getting free-form text bounce messages of varying degrees of usability. Some servers, notably qmail, even cut the bounce short. It’s pretty darn hard to parse this reliably on the client, so no, machines can’t easily read this.
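To be fair, RFC 3464 does define a machine-readable multipart/report DSN, and where a server actually emits one the status is easy to pull out; the trouble is that plenty of servers don’t. A rough sketch with Python’s standard email module (the hypothetical helper returns None for free-form bounces, which is exactly the case that needs heuristics):

```python
from email import message_from_string
from email.message import Message

def dsn_status(raw_bounce: str):
    """Extract (action, status) from an RFC 3464 DSN, or None for free-form bounces."""
    msg: Message = message_from_string(raw_bounce)
    if msg.get_content_type() != "multipart/report":
        return None  # free-form text bounce; needs keyword heuristics instead
    for part in msg.walk():
        if part.get_content_type() == "message/delivery-status":
            # The payload is a list of header blocks (per-message, then per-recipient)
            for block in part.get_payload():
                status = block.get("Status")   # e.g. "5.2.3"
                action = block.get("Action")   # e.g. "failed"
                if status:
                    return action, status
    return None
```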
… like inserting advertisements for Apple’s newest iService.
What is wrong with that?
With the example given, you are in all likelihood using the following:
* Apple branded Computer
* Apple branded OS
* Apple branded Mail Client.
Other than the most obvious reasons, Apple doesn’t have any legal fallback if they suggest a 3rd party solution; also, other services may be incompatible with Apple’s T&Cs.
So it does sound clever to say “lol Apple promoting Apple shit” … but there are very good managerial and legal reasons why companies do this.
The problem isn’t with the e-mail clients, it’s with e-mail protocols themselves.
E-mail is as archaic and broken as FTP (also, why are we even still using FTP?!). Just adding an attachment (typically in Base64 encoding) will inflate the file size by around 1/3. How on earth we’ve put up with that for this long is beyond me given how valuable bandwidth was (and, for many, still is).
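The roughly one-third figure is easy to sanity-check; a quick demonstration in Python (the payload size is arbitrary):

```python
import base64
import os

raw = os.urandom(1_000_000)        # 1 MB of arbitrary binary data
encoded = base64.b64encode(raw)

# Every 3 raw bytes become 4 ASCII characters, so ~1.33x before the
# MIME line breaks add a little more on top.
print(len(encoded) / len(raw))     # ~1.33
```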
Then you have the whole hodge-podge of half-supported “standards”: multiple different HTML encodings, plain text, no standard failure responses, no native compression, no native encryption (I know SMTP can be SSL encrypted, but that’s not even available as standard on many servers).
It’s quite simply just a horrible mess so I’m amazed it even works this well.
The thing about FTP, though, is the standard is so simple, and for the vast majority of servers it Just Works. I don’t see what needs to change, it’s dead simple.
It doesn’t though – there’s a whole series of hacks, from your router (e.g. FTP doesn’t natively work behind NAT or firewalls without adaptive routing) through to the client itself. (Sorry about the rant I’m about to launch into – it’s nothing personal.)
Every FTP server (read OS, not daemon) returns different output from commands such as dir, so FTP clients have to be programmed to support every server (utterly retarded standard!!)
What’s even worse is that FTP doesn’t have a true client / server relationship. The client connects to the server and tells the server which port the server should connect back to the client on. This means that firewalls have to be programmed to inspect the packets on all outgoing port 21 connections to establish which incoming connection requests to port forward. It’s completely mental! This means that the moment you add any kind of SSL encryption (which itself isn’t fully standardised and data channel encryption isn’t always enabled even when the authentication channel is) you can potentially completely break FTP.
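For what it’s worth, both quirks (server-dependent listing output and the active-mode callback) are visible from a trivial client. A sketch with Python’s ftplib; the host and credentials are placeholders:

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")     # placeholder host
ftp.login("user", "password")    # placeholder credentials

ftp.set_pasv(True)               # passive: the client opens the data connection,
ftp.retrlines("LIST")            # the only mode that reliably survives NAT/firewalls;
                                 # note the LIST output format is server/OS dependent

ftp.set_pasv(False)              # active: the server connects *back* to the client
ftp.retrlines("LIST")            # for the data channel - the part that breaks behind NAT
ftp.quit()
```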
Add to that the lack of compression (no compression support on a protocol named “file transfer protocol” – I mean seriously) and the very poor method of handling binary and ASCII files and you’re left with an utterly broken specification.
I will grant you that FTP is older than the WWW. FTP harks back to the ARPANET days, and its protocol actually made some sense back then (all clients were also servers; all machines were known and trusted, so servers could happily sit in the DMZ and your incoming connections already knew what OS they were connecting to… etc.)
However these days FTP is completely inappropriate. SFTP at least fixes some of these things via SSH tunnelling (compression, guaranteed encryption on both data and authentication channels, no NATing woes, etc), but that in itself can cause other issues (many public / work networks firewall port 22, SFTP servers can be a pain to chroot if you’re not an experienced sys admin, etc).
It just seems silly that FTP has never undergone a formal rewrite. Particularly when HTTP has undergone massive upgrades over the years and there’s been a whole plethora of significantly more advanced data transfer protocols from P2P to syncing tools (even rsync is more sophisticated than FTP). From cloud storage through to network file systems. FTP really is the bastard child that should have been aborted 10 years ago (sorry for the crude analogy, but I can’t believe people still advocate such an antiquated specification)
</rant>
What today is a headache for network security admins was a useful feature back when FTP was created: a user with only a low-speed connection to two hosts could pilot a data connection between them over a high-speed link.
This feature is sometimes called FXP, but it’s already part of the FTP protocol, RFC 959.
chmeeee,
“The thing about FTP, though, is the standard is so simple, and for the vast majority of servers it Just Works. I don’t see what needs to change, it’s dead simple.”
Oddly my experience is different.
I’ve implemented FTP software, so I appreciate its simplicity. But in practice I find protocols that span multiple ports to be a bad idea. They cause problems with firewalls and routers, fundamentally requiring very ugly stateful application-level gateways to work. Plain FTP usually fails against servers without “passive mode” hacks. Even then it fails between most normal peers. The default ASCII transfer mode can easily cause corruption and doesn’t serve much purpose these days.
SFTP is perhaps too complex (being an extension of SSH and all), but network-wise for the most part it just works on all networks that don’t have it blocked. It can easily be run behind a NAT on any port one wishes. Obviously it’s more secure too.
For me, rsync and sftp more than make up any void left by FTP.
So true. Just pray you never have to deal with FTPS, which, despite sharing a similar acronym to SFTP, is completely different and is a total car crash. It’s basically FTP over SSL, which is FTP, with all its warts, plus SSL grafted on. It comes in implicit and explicit varieties that change the way the ports are allocated (ala passive and active FTP), and it is a bitch to configure a firewall to allow it as well as regular FTP.
Standard compliance is pretty good these days, even in Exchange. I can’t vouch for the billion badly coded email clients but that’s not an email problem, that’s a code-quality problem.
Content is encoded in exactly one way: MIME.
I can’t think of a single modern SMTP server that doesn’t support STARTTLS.
Fair point there. However I still think the standard is outdated. For example, I don’t see the point in transmitting everything as ASCII – in fact I personally think base64 should die. Anything that adds ~30% overhead to each and every attachment clearly isn’t a sane standard for attachment encoding.
MIME isn’t a single encoding specification; there are a few different variants (IIRC the biggest being 7bit and 8bit).
I will grant you that the biggest part of this problem isn’t with SMTP server support but more with mail hosts (lazy admins) not defaulting to TLS. I can’t recall where I read this, but there’s still a significant amount of e-mail being transmitted between mail servers without any encryption.
I can understand why most of the WWW is unsecured (viewing, for example, BBC News with SSL could be considered overkill), however e-mails often contain personal / confidential information and thus should be encrypted by default.
TLS transmission on SMTP between mail servers really doesn’t make much sense. What’s the purpose of TLS? To add confidentiality and security. Mail servers don’t care about that, the end users do. OpenPGP and S/MIME serve just this purpose and are in wide usage because of it.
It’s analogous to paper mail. If I want to transmit confidential data, I sure as hell don’t trust my mailman and the whole mail delivery chain to keep my secrets. I encrypt my messages at home and all I require the mail service to do is deliver them.
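In that spirit, the end-to-end encryption happens before the message ever reaches a mail server. A minimal sketch, assuming GnuPG is installed and the recipient’s public key is already in the keyring (the key ID and filename are placeholders):

```python
import subprocess

# Encrypt the attachment to the recipient's public key before it goes
# anywhere near SMTP; the relays only ever see ciphertext.
subprocess.run(
    ["gpg", "--encrypt", "--armor",
     "--recipient", "alice@example.com",   # placeholder key ID
     "report.pdf"],                        # placeholder file
    check=True,
)
# Produces report.pdf.asc, which can be attached to the mail as-is.
# Only the holder of Alice's private key can decrypt it.
```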
The problem there is that confidential information is frequently transmitted via e-mail. In fact it’s pretty standard for things like passwords and user IDs to be sent this way. Let alone more confidential data sent by users who don’t understand the protocol.
Furthermore, it would make a great deal more sense to encrypt as standard at the protocol level rather than add another layer of abstraction at the user level.
You’re both right, and you’re both wrong.
Keeping messages confidential requires encryption at the endpoints. That means clients need to handle it. Servers are not involved. In my opinion, not only should clients make this easy, it really needs to be available for all files regardless of method of delivery. It doesn’t matter if you e-mailed it to me, I downloaded it from a web site, or you gave me a thumb drive, we need some way to prove who we are and keep the file secret from everyone else.
TLS does have a role in e-mail, and it’s not encryption. TLS provides authentication. Authentication is arguably the largest problem with e-mail. The original protocol simply trusted clients and servers not to lie about who they were, and that’s why we have spam. If our servers only accept mail from servers authenticated with certificates, then blocking spam is easy.
“TLS does have a role in e-mail, and it’s not encryption. TLS provides authentication. Authentication is arguably the largest problem with e-mail.”
I agree that this is one of email’s bigger problems, and though there are solutions, the fact that we cannot depend on them working even a fraction of the time makes them somewhat useless. What good is authentication if it doesn’t work reliably with my current contacts? If we allow any exceptions for others to contact us without authentication, then email remains vulnerable. I doubt we’ll see a massive roll-out to fix SMTP’s problems. We’re more likely to see people adopt a new service that has security built into it from version 1.
“The original protocol simply trusted clients and servers not to lie about who they were, and that’s why we have spam. If our servers only accept mail from servers authenticated with certificates, then blocking spam is easy.”
I disagree, authentication won’t stop spam. Spam will just be authenticated like any other mail. Not all spammers use forged headers.
The authentication that TLS provides is only of limited use to end-users though.
S/MIME and GPG/PGP are really the only real options today to verify senders and to ensure encrypted end-to-end delivery. It’s really sad that major clients like Thunderbird and Outlook only support S/MIME out of the box and you have to go through insane hoops, especially with Outlook, to get PGP/GPG working. All we need is really there already; it’s just that we’re not using it.
Good point. Thus far I’m yet to see any encryption plugins for Outlook that didn’t completely suck.
Though I’d be interested to know if you guys have found one.
Have you tested gpg4o? It really integrates seamlessly into Outlook 2010!
Mark
The vast majority of TLS usage with an e-mail system is FOR encryption NOT authentication. Authentication is done via SASL for SMTP or the existing authentication mechanisms with POP3/IMAP.
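That division of labour is visible in any submission client: STARTTLS encrypts the hop, SASL authenticates the user. A minimal sketch with Python’s smtplib (server, port and credentials are placeholders):

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "you@example.com"
msg["Subject"] = "Hello"
msg.set_content("Plain text, as nature intended.")

with smtplib.SMTP("mail.example.com", 587) as smtp:    # placeholder submission server
    smtp.starttls()                                     # TLS: encrypts this client-to-server hop
    smtp.login("me@example.com", "app-password")        # SASL: authenticates the sending user
    smtp.send_message(msg)
```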
OpenPGP features file-based encryption and authentication as well (in fact, the e-mail stuff is just a particular application of the signature and encryption algorithms, which work with any digital data).
Who would issue these certificates? How would you check a certificate’s validity exactly? Who would be authenticated by it (sender, recipient, both)? Remember, we’re talking server to server SMTP here, not client to server – that’s a given, STARTTLS has been in use here for years.
Yes, exactly my point! Why then would you trust your post boy (mail server) not to take a peek inside the envelope if it carries sensitive information? That’s actually an argument *for* end-to-end encryption!
Because TLS is necessarily two-way and hop-by-hop. You can’t establish a TLS session via e-mail itself; the round trips for key exchange and other protocol setup would be just terrible. That’s why we have things like S/MIME and PGP.
I know it is – I was arguing in favour of encryption the whole time *facepalm*
I’m aware of that, but even just TLS is a huge step up from where we currently are. However I wasn’t saying the encryption method had to be TLS, I just said it should be a requirement in the protocol / specification rather than an addon provided by the client.
At least with enforced TLS, it means that even lazy developers are forced to encrypt communications and it prevents any interception. It “just” doesn’t account for hacked mail relays.
In replying to my argument against using TLS on server-to-server SMTP, you proposed to “encrypt as standard at the protocol level”. Which protocol? What level? As you didn’t point that out, I quite naturally assumed you were talking about TLS or SMTP encryption of some other sort.
That plainly does nothing. The encryption endpoints (i.e. mail servers) don’t care about confidentiality or security. SMTP cares nothing, I repeat, nothing about the format, structure and contents of the mail messages it carries, it’s really just a plain-text message exchange protocol. SMTP != MIME. So before you facepalm, please consider your words somewhat more carefully.
How exactly? What kind of problems does it solve and how does it achieve that? “Encrypt everything” is a cute mantra, but pointless encryption that doesn’t solve any real problem is just a waste of (human and machine) resources.
Again, SMTP != MIME.
How does it prevent interception? All I need to do, as an attacker, is either get access to any of the mail servers in transfer (easy to do if you’re the government or other powerful entity), or simply mandate that all e-mail from my network go through my SMTP servers. (There’s also other possibilities, like MITM somebody by inserting false MX records, etc.)
You don’t have to hack a relay to get into the e-mail relay chain. All you need to do is control the relay in a totally legitimate way. This is common practice in corporate, campus and even service provider networks – they prevent SMTP out and only allow you to go through their servers. Again, this is common practice and legitimate, so any argumentation of the sort “well simply change providers then” would be nonsensical.
It’s even worse in that encrypted SMTP connections only happen between SMTP clients and servers that support it. Meaning, your e-mail client may use TLS to connect to your SMTP server, and your SMTP server may use TLS to connect to the next SMTP server in the chain .. but there’s no guarantee that the next SMTP server will support TLS .. meaning the message goes through unencrypted.
TLS, SASL, and other encryption/authentication methods are really only useful if you control *EVERY* SMTP client and server in the chain. Which really only makes it useful for remote workers connecting in to the corporate mail system to send internal mail.
I like using the “postcard in an envelope” analogy when explaining e-mail to people. It really brings home the point that “anyone handling the message en-route can read it”.
Even worse, most servers default to accepting any certificate from other servers, regardless of expiration or validity. You will never know if someone does a MITM attack on your connections.
Server-to-server TLS is rather meaningless, really.
Not at all. TLS is essential for the confidentiality of your login credentials when you send and receive mail from the server.
But, is that Microsoft’s implementation of MIME? A non-MS implementation of MIME?
What about those still using “quoted-printable” instead?
And what about those using TNEF instead of MIME?
And is that using MS pseudo-HTML? Or real HTML? Or someone else’s bastardised HTML (like FirstClass mail)?
There’s not “1 single, standard method” for encoding even the text in an e-mail, let alone the “HTML” formatting, or even the attachments.
MS uses standard MIME encapsulated messages.
Tough luck, it’s not 1995 anymore.
TNEF isn’t an alternative to MIME. TNEF is a format for the attachment itself. The message is still multipart MIME. Email does not, and should not, care about the format of the attachment itself. That’s up to the client to handle.
This has nothing to do with the mail system.
There’s a difference between the format of the mail and the format and encoding of the parts.
So, he wants to strip out which server rejected the e-mail, the last server in the chain that transmitted it, the stated reason for rejection, and any information to uniquely identify the e-mail beyond subject. Notably, it leaves off the acceptable attachment size limit, which, IMHO, is a bit useful for resending.
The first suggestion is to compress the file, which is unlikely to help, as it’s an image and he needs 30% compression. The second is a link, which is more of an advertisement than anything. The third assumes the recipient uses iMessage, which is unlikely and will confuse people such as the author when it fails to work.
“User friendly” error messages are nice if they’re useful. Modern software tends to sacrifice the latter for the former. It either hides any information that might be useful in fixing the bug, or jumps to inaccurate conclusions so it can hold your hand while it leads you on a wild goose chase. Hiding useful information keeps users ignorant, and prevents them from solving their own problems (while learning in the process).
That said, some people hate to learn. They want the computer to make a 45 MB e-mail go through, rather than learn why it’s a bad idea to e-mail such a monstrosity. The author states “I can barely read that email” when it has three sentences and the specific error as the first line of the debug information. IMHO, rather than save space by eliminating useful information from bounce messages, he could have omitted the last two words of that quote.
In the era of broadband… Mightn’t it be time to change email so that it DOES handle large file transfers?
I’m sorry, but I personally refuse to use crappy cloud solutions with separate applications, dodgy ToS, etc. to give somebody a file. Instead of trying to change people – which never works – change the product to adapt to the expectations of your users.
That’s retarded. We’d have people emailing each other 1080p video files and ruining SMTP for everyone else. I don’t think that it’s too much to ask people to use a different protocol to send big files. They’ve only had 30 years to get used to it.
Cloud or not, email is becoming a strain on both the web and the mail servers (there was an article here on OSNews about more than 40% of traffic on the internet at one point being spam). People just do not read emails any longer (I can back this up with results from SCB, the Swedish “Statistics Central Bureau”, although these statistics only apply to surveys of Swedes), just as they do not read anything on Twitter or Facebook that is not in their “circle”. Some bad eggs ruined a fine concept for the rest of humanity, just as so many times before. But I would argue it was kind of ruined the moment anything but text was introduced into e-mail.
And how many of us are not too tired of trying to find “Leg4l dru6s & 0v1agr4” on the web never to find any. =P
The problem with making email handle large attachments itself is that email is designed to be an all-or-nothing system. How would you change it? Make attachments separate? But if you do that, why not use an email service like Whale Mail (apparently now discontinued by Symantec, was a really good service), which is designed for emailing large files to others?
That’s a massive undertaking and it will not happen any time soon, if ever. Someone might come up with an improved alternative though.
Right now it’s time to use the right tool for the job and the right tool for sending gigantic files is not email. Just like how an envelope is not the right way to send a bulky TV.
It’s not as simple as that. The receiving server almost certainly will run some kind of AV scanner on the received message before it gets to the user’s mailbox. Running AV scans on hundreds of thousands of multi-megabyte emails is no fun, and not a cheap or easy thing to have to do.
That’s not even getting into the size of the mailboxes you’d need to store all that data for the users who use IMAP but don’t ever delete anything.
So if you want to send your 40MB attachment to your mailing list of friends, the network must bear the load of the 40MB times the number of people mailed. No thanks.
If you want to send files typical of a broadband era, you use tools from a broadband era, so clouds and whatnot (and then you put the URL in your mail).
*sigh*
I don’t think you understand what it means when I say “change email”. I’m not saying “allow larger files to be sent”. I’m thinking of a complete rethink of how the technology works – heck, you could integrate smart P2P technology into the whole thing.
Lame backtrack.
Yeah, Google tried it with Wave and failed miserably. E-mail is not a P2P service. For that, use any of the IM services which support P2P file transfers (e.g. XMPP). I don’t want your multiple gigabyte attachments CC’d to three dozen people bogging down my mail server… you want to transfer huge amounts of data, pay for the infrastructure yourself. I have nothing personal against you, Thom, but as a mail admin myself, I know how this would end up being used.
saso,
“I don’t want your multiple gigabyte attachments CC’d to three dozen people bogging down my mail server… you want to transfer huge amounts of data, pay for the infrastructure yourself. I have nothing personal against you, Thom, but as a mail admin myself, I know how this would end up being used.”
Unless I’m mistaken, he doesn’t mean to replace SMTP with another SMTP-like infrastructure, but to completely rethink email messaging as a peer-to-peer content distribution system with support for interpersonal messaging. Only the manifest would be routed through the mail network, but the attachments would be P2P.
People will continue to use legacy jargon such as “I’ve attached a large file to the email I sent you, you should get it soon”.
Under the hood, the recipient client might be notified of a new message and the client could start downloading individual attachments like in bittorrent. Then, and only if it wants to, the receiving end could download the large media files directly from the sender.
This would make it far more efficient than traditional SMTP, especially with large mailing lists or repeated/re-forwarded mailings. Obviously this system has slightly different semantics from emails handled by SMTP servers, but the recipient’s server could be configured to automatically download attachments for data retention purposes if they wanted to.
I find people don’t particularly want to use email to transfer files. But email is the least-common-denominator approach for two arbitrary people to transfer files to each other without owning a server somewhere; it’s a de facto mechanism for file transfer. And it’s a natural way of attaching a message to those files as well.
I think Thom’s onto something, unifying email with P2P would be a big hit if it were adopted by enough people to make it scale.
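None of this is spelled out in the comments above, but as a rough illustration of the kind of manifest being described, here is a sketch (hypothetical field names; content hashes stand in for whatever the P2P layer would actually use to locate the data):

```python
import hashlib
import json

def attachment_ref(path: str) -> dict:
    """Describe an attachment by name, size and hash instead of embedding it."""
    with open(path, "rb") as f:
        data = f.read()
    return {
        "name": path,
        "size": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),  # what the recipient would fetch via P2P
    }

manifest = {
    "from": "alice@example.com",
    "to": ["bob@example.com"],
    "subject": "Holiday photos",
    "body": "Full-size files available on demand.",
    "attachments": [attachment_ref("photos.zip")],   # placeholder filename
}

# Only this small manifest travels through the mail network; the bulk data
# is fetched peer-to-peer (or from a delegate) only if the recipient wants it.
print(json.dumps(manifest, indent=2))
```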
But that’s just a fancy way of putting a link in the mail, more or less. Perhaps a neat integration of your email client with your cloud-storage. Of course, this poses all kinds of interesting design problems and opportunities for webmail.
It would need to be an out-of-band delivery unless you wanted to redesign the SMTP protocol (which isn’t going to happen).
On the other hand, since Thom doesn’t want to use fancy modern things like cloud storage (that are, you know, actually designed to solve this very problem) maybe he does want to redesign the protocol.
Soulbender,
“But that’s just a fancy way of putting a link in the mail, more or less. Perhaps a neat integration of your email client with your cloud-storage.”
Putting links in emails is only half the problem. I can send my parents files by uploading the files to my web server and emailing them a link. But they cannot send me files this way and neither can most people. Integrating email and P2P in a standard way would be a complete & efficient solution that everyone can use without much difficulty.
“problems and opportunities for webmail.
It would need to be an out-of-band delivery unless you wanted to redesign the SMTP protocol (which isn’t going to happen).”
Oh I know that, even if it could be extended, I don’t have any confidence that changes to SMTP would be widespread enough to make the plan viable. The only way such a system could work is on a new email platform where all clients supported these features from the get-go without worrying about legacy compatibility.
Right. What I mean is that this kind of change does not need a redesign of the email system. It can be implemented entirely in the client in a way that is compatible with the existing system and clients. It’s just a matter of making it easy to upload the attachment to somewhere and then put the link to it in the mail.
Soulbender,
“Right. What I mean is that this kind of change does not need a redesign of the email system. It can be implemented entirely in the client in a way that is compatible with the existing system and clients. It’s just a matter of making it easy to upload the attachment to somewhere and then put the link to it in the mail.”
Oh, yes of course you could do that, but I see problems with it that way.
1. You face the stark reality that nobody is going to have a P2P client to transfer the large files directly and you’ll be forced to have a server based fallback mode that nearly EVERYONE is going to end up using. This requirement negates the entire P2P concept which in my opinion is the core benefit. You’d end up with something like imgur.com for emailed files with no real integration.
2. People shouldn’t have to ditch their current email clients (and possibly addresses) to install another client that might not be as good at composing (for instance). Many existing clients are unlikely to get native P2P support before popularity kicks in. Think of all the webmail & fat clients that exist.
3. Market share. The market share of the P2P version will be so small that everyone will just assume that no one else has it (because most won’t) and they’ll just revert to attaching large files inline when they send emails.
I’d like to draw a parallel here with PGP’s attempt to bring end to end cryptographic security to the email platform. What they were doing was more important than optimizing file transfers. Most would agree it was a good idea, but they faced a catch 22 that was never overcome – they could not convert users because they had too few users to make conversion worthwhile. Banks never switched because customers weren’t there, customers never switched because friends and businesses weren’t there, businesses weren’t there because clients were not there, etc. There’s nothing to gain by using it alone.
My own speculation is that if PGP had created a new email system where all clients on the network were always secure, it could have had a better shot at success than when it tacked PGP on top of existing protocols and became immediately marginalized. This is admittedly something of a psychological difference, but I think more people/companies would have bought into a small network where everyone can be reached securely than a huge network where only a tiny fraction can be reached securely.
Alfman,
you don’t need to address me by name (though I appreciate it); the comments system already tells me which post you are replying to
Try as I might, I can’t seem to find any hint of what you’re describing in Thom’s original comment. Seems like what he meant was very much open to the imagination of the reader (that is not to say that he doesn’t have a clear picture himself, only that he didn’t specify it).
You’re talking about a glorified integration between traditional MIME e-mail and a P2P or cloud-storage service (perhaps sending magnet links or some such method). That can easily be achieved using current technology. Using direct P2P, however, has a few drawbacks:
1) Currently, e-mail is send-now-read-later. Your proposed modification would break that and would require both machines to be on-line for the entire duration of the download. Also, in an era of ever increasing numbers of mobile devices which are sometimes online and sometimes offline, this requirement can really come back to haunt you.
2) Try CC’ing multiple people with such a message and be prepared to hit your upload bandwidth cap soon.
Sure, some solution can be worked out. Trouble is, getting the rest of industry on board as well. There’s a good reason why we use SMTP: everybody else does.
I envy the place where you work. I routinely get funny videos or stupid pictures BCC’d to dozens of people from some notorious e-mail forwarders… give them the chance and they will send you 1080p full-length video. as a result, I have to regularly vacuum my mailbox since it drives the admins crazy if you have several hundred people with mailboxes filled with several gigabytes of worthless trash…
Been there, done that. Google Wave failed. Compatibility does matter.
saso,
“you don’t need to address me by name (though I appreciate it); the comments system already tells me which post you are replying to ;-)”
I find it difficult to tell sometimes because it says “in reply to” but it identifies a thread rather than a person. Occasionally I have to click the parent message to be sure. I once asked osnews if they might be able to display the name for this reason.
“That can easily be achieved using current technology.”
There are no technological impediments in my mind, just standardization ones.
“Using direct P2P, however, has a few drawbacks: 1) Currently, e-mail is send-now-read-later.”
I did think of that however I wanted to keep the post simple. Yes having always on devices is a potential caveat especially when people aren’t working “in sync”, however there’s no reason that mobile devices would have to serve files directly – they might use a delegate which would have bonus uses anyways.
You could argue this is similar to transmitting files to an SMTP server for queuing. But using SMTP servers doesn’t allow me to reuse that data again. Today every email I send needs to exit my network twice, once through SMTP and again via IMAP for archival. Ideally I’d be able to send files directly from my IMAP store to a recipient without downloading & uploading over and over again. Current SMTP protocols just cannot realize the type of efficiency that ought to be possible.
“2) Try CC’ing multiple people with such a message and be prepared to hit your upload bandwidth cap soon.”
To be perfectly honest I think this is a scenario that P2P mail would particularly excel at. P2P can multiply one’s reach tremendously.
“I routinely get funny videos or stupid pictures BCC’d to dozens of people from some notorious e-mail forwarders… give them the chance and they will send you 1080p full-length video.”
Using the proposed protocol, you’d only receive the manifest; the attachments would be optional. And if it were sophisticated enough, you might even be able to avoid downloading parts of the file that you skip over, as with HTTP streaming. This would do a lot to unclog messaging servers, whose principal purpose would be fast, reliable messaging without file transfer.
“Been there, done that. Google Wave failed. Compatibility does matter.”
Well, most of Google’s projects fail commercially, but that doesn’t invalidate the ideas. Users would suffer hugely if this was built on top of existing email only to realize that 0% of their contacts can actually use it. In my opinion the only chance this would have is if it were a new network where everyone joining it would support the same baseline features. “You want an easy way to transfer media with folks back home, just install ABC!” Then everyone with ABC installed could use every one of its features without worrying about SMTP compatibility issues (webmail users, unsupported servers & clients, locked-down email in phones, insecurity, etc).