Google has quietly axed the web services API to its eponymous search engine. The stealth move was made without any announcement, but visitors to the page now receive a blunt message, backdated to 5 December, advising them that the SOAP API is no longer supported.
Let's hope this is a prelude to a new API that is even more feature-rich!
If not, this will have left a lot of developers out in the cold with nowhere to go...
Yes, they’ve replaced SOAP with AJAX: “The AJAX Search API is better suited for search-based web applications and supports additional features like Video, News, Maps, and Blog search results,” Google said.
But there appear to be usage restrictions: basically you need to take the results given by Google as-is without reordering or modifications.
“basically you need to take the results given by Google as-is without reordering or modifications.”
I seriously doubt that is legally enforceable.
This isn't really much of a big deal. Has anyone here actually USED SOAP? I have, and I can tell you it's better off dead. The day that people started playing around with AJAX, SOAP died. Piece of trivia: SOAP was actually a big part (if not the core) of the .NET dream; it just happened to fall on its face.
You do realize that SOAP and AJAX are entirely unrelated, and from a technical standpoint not even in the same realm, right?
SOAP was a very minimal part of the .Net “dream”…it was nothing more than one of the payload mechanisms for serializing objects across the wire. XML as a whole is a more integral part, SOAP just happens to be a protocol that sits on top of XML.
SOAP is far from dead; it just seems everyone wants to use the word AJAX wherever they possibly can, even though the technology behind AJAX predates the SOAP spec.
“…You do realize that SOAP and AJAX are entirely unrelated,…”
This is not true, they are related. In my country AJAX is a cleaning detergent, aka SOAP. Both are used in my house and many others.
Modded up, quite funny!
OK, maybe I'm missing something here. Everyone is saying that SOAP and AJAX are totally unrelated. I don't see how you can say that. I understand that AJAX is not a clearly defined spec (well, not a spec at all) in the way that SOAP is; however, AJAX and SOAP are basically ways of accomplishing RPC over HTTP using XML. Am I missing something?
And as far as SOAP being a minimal part of the .NET dream, I certainly disagree with this. During the early days of the .NET hype everything was about Passport and Microsoft's web services. Microsoft was aiming to make itself the hub of web services (basically, exactly what Google is *doing* now). Am I taking crazy pills here? Am I the only person who remembers this hype? In all of the early ".NET is going to take over computing" hype, the CLR was mentioned in the same breath as web services and SOAP.
When I tried my hand at SOAP (and it was a few years ago), there was a fair amount of overhead on the server to get everything working properly. Basically, it was a pain in the ass. I had all of this overhead and rigid structure just so I could ask my server a simple question.
I don’t see the point in bothering with all of that crap when I can just use an AJAX solution. Apparently, neither does Google. And yes, I know AJAX is just a buzzword, but if you have another non-hyped way to say “Use the XMLHTTPRequest object to make a simple call back to the server and get an XML formatted response” then I’ll use that term instead of AJAX.
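For anyone who hasn't done it, the round trip being described really is that small. Here is a minimal sketch in Python of the same idea done from a script instead of a browser (where XMLHttpRequest plays this role); the endpoint and the response layout are made up purely for illustration.

```python
# A sketch of the "simple call, XML back" round trip described above,
# done from a script rather than a browser. The endpoint and the shape
# of the response are hypothetical.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def simple_query(term):
    # Hypothetical endpoint that answers a query with a small XML document.
    url = "https://example.com/api/search?" + urllib.parse.urlencode({"q": term})
    with urllib.request.urlopen(url) as resp:
        doc = ET.fromstring(resp.read())
    # Assume a response shaped like <results><item>...</item><item>...</item></results>.
    return [item.text for item in doc.findall("item")]

if __name__ == "__main__":
    print(simple_query("osnews"))
```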
SOAP is widely used. .NET web services and Java web services are all based on SOAP. I work for a company that frequently needs to exchange data with other companies. Currently we run more than a couple dozen web services exchanging data back and forth between .NET and Java. Also, Java used SOAP long before it was used in .NET, and web services are just a small piece of what is in .NET. SOAP has nothing to do with AJAX either; they are two completely different technologies.
I use SOAP frequently to communicate between Flash (ActionScript) and a .NET server. I wouldn't trade it for any other transport unless it is equally transparent (maybe Flash Remoting at some point). And by transparent I mean I never have to look at the XML (even though we did manage to make the XML quite human-readable) – all serialization is done automatically at both ends.
Another reason to hate web services, fickle providers.
I guess Google is sorry they ever gave us access and now wants to take it back. I would take this as proof that they cannot be trusted to NOT change the Ajax API later.
Time to look elsewhere… (Google is over-rated anyway. Many other engines return mostly identical results.)
I haven’t used the SOAP API, but I imagine that Google did not return any ads with the SOAP response, nor could they enforce displaying any ads or enforce result ordering.
With the AJAX API (it's more of a JavaScript library) they eliminated all these issues. You get to embed the results in your site with some customization, thus keeping the control in Google's hands.
You may be right about the purpose of this change, but there is no technical reason the AJAX API's results could not be tweaked and changed. It seems they will enforce this with some kind of license agreement, which means they could cut you off if you broke it. So it is much more enforceable than license agreements on software installed on a PC.
And what if I don't want to embed the results in a webpage but use them in a fat client (with no access to an embeddable web browser)? I guess I'm f*cked then :)
how so?
It can be done. You can use the HTTP protocol to send the request and receive the results. If there is no HTTP library available, you can use sockets.
Parsing the results can be done with an HTML parsing library, or you can write your own solution. Google's results are not that complicated as HTML goes.
DG
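As a rough sketch of the approach DG describes, using nothing beyond the Python standard library: fetch the results page over plain HTTP and pull the links out of the HTML yourself. The markup assumed here is only a guess (Google's actual HTML differs and changes over time), and scraping it may not be permitted by the terms of service.

```python
# Fetch a results page over plain HTTP and extract anchor hrefs from the HTML.
# The result markup is assumed, not taken from Google's actual output.
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every absolute link on the page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href and href.startswith("http"):
                self.links.append(href)

def search(term):
    url = "https://www.google.com/search?" + urllib.parse.urlencode({"q": term})
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

if __name__ == "__main__":
    for link in search("osnews")[:10]:
        print(link)
```

The obvious downside is the one raised in the next comment: the output format is not a contract, so the parser breaks whenever the site changes its HTML.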
Parsing HTML results – I've done that with several sites. The trouble is when such a site knows about it and changes its output intentionally.
I use REBOL for the task; it is cool and simple to parse results with it. But of course, IMO, it is a pity the API is being removed...
-pekr-
I don't know much about REBOL. I use Java, Python, or Groovy. My approach is to create a general class for parsing and subclass it for the particular version of the site's output.
DG
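A small Python sketch of that "general parser class, one subclass per output version" idea, with invented names: when the site changes its markup, you add a new subclass rather than touching the callers.

```python
# Base class defines the parsing interface; each subclass handles one
# (assumed) version of the site's output format.
from abc import ABC, abstractmethod
import re

class ResultParser(ABC):
    """Common interface: turn a raw HTML results page into a list of URLs."""
    @abstractmethod
    def parse(self, html):
        ...

class ResultParser2006(ResultParser):
    """Parser for one particular, assumed version of the results markup."""
    LINK_RE = re.compile(r'<a[^>]+href="(http[^"]+)"', re.IGNORECASE)

    def parse(self, html):
        return self.LINK_RE.findall(html)

def get_parser(version):
    # Pick the subclass that matches the output format currently served.
    parsers = {"2006": ResultParser2006()}
    return parsers[version]
```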
> It can be done. You can use the HTTP protocol to send the request and receive the results. If there is no HTTP library available, you can use sockets.
> Parsing the results can be done with an HTML parsing library, or you can write your own solution. Google's results are not that complicated as HTML goes.
Indeed, that’s what I did in googlefs (see http://haikunews.org/953 ).
I planned on adding a SOAP backend in addition to the hand-crafted HTML parser, because the HTML results tend to change over time... but it required registration and was limited to 1,000 queries of 10 results each.
Now I don't see myself including a JavaScript interpreter in the kernel just to run AJAX code to get the results! XML would already be too much. (Yes, it can be compiled as a userland module too, but hey, where's the fun in that?)
If anyone wants the parser code, I could publish it; it was a quick hack but should still work.
I don't understand what you mean... Why would you have to deal with JavaScript at all? Even with the "AJAX" API, wouldn't requests just be XML, and responses also be XML? The J in AJAX is the client-side JavaScript manipulation of the DOM, not any sort of transport. I haven't looked at Google's AJAX API, so maybe they're doing something wacky, but it would seem to me that you shouldn't have any more trouble communicating with it from plain old C than you did with the other API.
This really relates to the earlier item on Web 2.0 (forgive the technical ignorance if it ain't) and its viability, and also to the viability of stepping up to the network with new ways of describing and providing material from library 'silos' held by vendors. Not long ago Google also axed Google Answers – is this the way it's going to be? Are new Web 2.0 services increasingly ephemeral, with the major commercial players switching tracks whenever the revenue stream needs to be channeled a different way? Who says the library and its 'traditional' way of ordering and classifying information is dead? What some need to remember is that certain universities and their libraries (Oxbridge) have been around longer than most nation states, and certainly longer than the likes of Google. It may now be a better option for libraries to open up their OPACs to the Web via LibX rather than through Google Scholar...
I have been able to do automated Google searches without the API. I just used the HTTP protocol. Of course, my applications will need to change when Google modifies the query syntax and/or result format.
I think that SOAP is a bit too much, like an elephant rifle used to hunt a fly. Maybe just simple XML over HTTP, without SOAP complexity.
DG
“””
I think that SOAP is a bit too much, like an elephant rifle used to hunt a fly. Maybe just simple XML over HTTP, without SOAP complexity.
“””
While I agree that is a step in the right direction, it’s still swatting flies with a telephone pole.
Why not just use the lightweight, no-nonsense, no-muss, no-fuss JSON?
That’s what it’s there for.
http://en.wikipedia.org/wiki/Json
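To make the point concrete, here is roughly what consuming a JSON payload looks like in Python: one standard-library call and plain dictionaries, with no envelope or schema. The payload shown is invented for illustration.

```python
# Decoding a (made-up) JSON search response takes one call; the result is
# ordinary dicts and lists, with no SOAP envelope or WSDL required.
import json

payload = '{"results": [{"title": "OSNews", "url": "http://www.osnews.com/"}]}'

data = json.loads(payload)
for hit in data["results"]:
    print(hit["title"], "->", hit["url"])
```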
Considering that people seem to value scandal more than facts, why not use a heading such as “Google Shuts Down Searching!!”?
Come on, they’ve deprecated one API.
Is there anything wrong with issuing a GET on the server side to the AJAX API? That works, right?
If that’s the case, then I’d just say Google has deprecated SOAP in favor of REST (which would be a more accurate description of the API than AJAX, if you can indeed do GETs to it from the server as well).
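If that is indeed how it works, the server-side call would be nothing more than a GET plus some JSON decoding. The sketch below assumes a REST-style endpoint and a particular response layout for the AJAX Search API; both are assumptions rather than anything taken from Google's documentation, so treat it as illustrative only.

```python
# A sketch of a server-side GET against an AJAX/REST-style search endpoint.
# The URL, parameters, and JSON layout are assumptions, not documented values.
import json
import urllib.parse
import urllib.request

def ajax_search(term):
    # Assumed endpoint and query parameters; check the real documentation.
    url = ("http://ajax.googleapis.com/ajax/services/search/web?"
           + urllib.parse.urlencode({"v": "1.0", "q": term}))
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    # Assumed layout: a top-level object containing a list of result entries.
    return data.get("responseData", {}).get("results", [])

if __name__ == "__main__":
    for hit in ajax_search("osnews"):
        print(hit)
```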