Twitter Search API with jQuery and JSONP

The jQuery JavaScript library offers many advantages to developers who dislike writing complex JavaScript code. The idea behind jQuery is "unobtrusive JavaScript": it provides simple methods for the most commonly needed functionality, exposed through a compact "chained" API (similar to a Fluent API) in which methods can be chained together, one after the other, to achieve the desired outcome.

My earlier article on JSONP seems to have been well-received, so I thought it would be a good idea to offer something new that amplifies the concept.

One of the most powerful features of jQuery is its load and $.ajax functions. Most of jQuery's Ajax helpers are wrappers over the $.ajax functionality. For example, the load function can be called directly off a reference to a DOM element:

$('div#results').load('search.aspx', 'q=jQuery&maxResults=20');

The second parameter passed to load is called data. If you pass a string (as above), jQuery executes a GET request. However, if you instead pass an object containing your data, it performs a POST request.
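To illustrate the distinction, here is a minimal sketch; the element ID, URL, and function names are hypothetical, not part of the demo above:

```javascript
// Passing a string as the data argument causes jQuery to issue a GET request:
function loadViaGet() {
  $('div#results').load('search.aspx', 'q=jQuery&maxResults=20');
}

// Passing an object as the data argument causes jQuery to issue a POST request:
function loadViaPost() {
  $('div#results').load('search.aspx', { q: 'jQuery', maxResults: 20 });
}
```

The only difference between the two calls is the type of the data argument; jQuery inspects it and chooses the HTTP verb accordingly.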

You can do processing when a request has completed by supplying a callback function. The callback function takes three parameters: the response text (the returned data), a string containing the status of the response, and the full response object itself:

$('div#results').load('search.aspx', function(data, status, response) {

// Post-processing here.

});

Here is a sample page I put together that uses the Twitter JSON Search API to retrieve tweets matching a user-entered search term or "tag":

<HTML>
<HEAD>
<TITLE>Tweet Search</TITLE>

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>
<script type="text/javascript">
$(document).ready(function() { // don't do anything until the document is loaded.
    $("#submit").click(function(event) { // wire this up as the click handler for the submit button.
        var searchTerm = $("#search").val(); // get the user-entered search term
        var baseUrl = "http://search.twitter.com/search.json?q=";
        // call getJSON with the complete url: encoded search term, rpp, and a JSONP callback
        $.getJSON(baseUrl + encodeURIComponent(searchTerm) + "&rpp=1500&callback=?", function(data) {
            $("#tweets").empty(); // clear out any previous results.
            if (data.results.length < 1) // friendly "no results" message recommended by Robbe Morris :-)
                $('#tweets').html("No results. Nada. Nuttin. Zippo.");
            $.each(data.results, function() { // iterate over the results, constructing the HTML for the display.
                $('<div align="justify"></div>')
                    .hide()
                    .append('<img src="' + this.profile_image_url + '" width="80px" /> ')
                    .append('<span><a href="http://www.twitter.com/' + this.from_user + '">'
                        + this.from_user + '</a> ' + makeLink(this.text) + '</span>')
                    .appendTo('#tweets') // append the constructed results HTML to the tweets div
                    .fadeIn(2000); // fade in the results over 2 seconds.
            });
        });
    });
});

function makeLink(text) { // this REGEX converts http(s) links embedded in the tweet text into real hyperlinks.
    var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;
    return text.replace(exp, "<a href='$1'>$1</a>");
}
</script>
</HEAD>
<BODY style="margin-left:20%;margin-right:20%">
<div align="center">
<h2>Twitter tag search</h2>
<div>Enter Search Term</div>
<input type="text" id="search" />
<input type="button" id="submit" value="Search" />
<div id="tweets"></div>
</div>

</BODY>
</HTML>


At the top, in the <HEAD>, we have a <script> tag whose src attribute points to the Google CDN URL for jQuery 1.4.4. There are several reasons to use a CDN rather than loading the script from your application's file system:

Lower Latency
A CDN (Content Delivery Network) distributes your static content across servers in diverse physical locations. When a user's browser resolves the URL for these files, the download automatically targets the closest available server in the network. This means that users who are not physically near your server can download jQuery faster than if you made them download it from your server.

More Parallelism
Browsers limit the number of connections that can be made simultaneously. This limit can be as low as two connections per hostname. Using the CDN eliminates one request to your site, allowing more of your content to download in parallel.

Better Caching
Another big benefit of using the CDN to deliver jQuery is that your users may not need to download it at all. When a browser sees multiple subsequent requests for the same CDN hosted version of jQuery, it understands that these requests are for the same file. Not only will the CDN's server return a 304 'Not Modified' response if the file is requested again, but it also instructs the browser to cache the file for up to a year. So the odds are that the user already has the script in their browser's cache.

If you are using Visual Studio 2010, you'll want to initially point your jQuery script tag to your local Scripts folder where you have deposited the latest jQuery script along with the vsDoc file, which provides Intellisense for jQuery. You can get those here. But for production, you'll definitely want to use a CDN URL.

In the second script above, we have the "guts" of the system. We use the $(document).ready wrapper to ensure that nothing runs until the page is loaded. Inside that, I've attached a click handler to the submit button for the further processing. When the button is clicked, we get the user-entered search term and construct the full RESTful URL for the search, which includes the base URL, the search term, an rpp (results per page) value, and a JSONP callback.
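Factoring the URL construction into a small helper makes the pieces easier to see. This is a sketch with a hypothetical function name; running the search term through encodeURIComponent keeps characters like spaces and '#' (hashtags) from breaking the query string:

```javascript
// Hypothetical helper that assembles the Twitter search URL from its parts.
function buildSearchUrl(term, rpp) {
  var baseUrl = "http://search.twitter.com/search.json?q=";
  // encodeURIComponent escapes '#' as %23, spaces as %20, etc.
  return baseUrl + encodeURIComponent(term) + "&rpp=" + rpp + "&callback=?";
}

var url = buildSearchUrl("#jquery", 100);
// → http://search.twitter.com/search.json?q=%23jquery&rpp=100&callback=?
```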

The JSONP callback, "&callback=?", causes jQuery to automatically wire up a unique method name for the callback function. The server API understands this and wraps the entire JSON data stream in a function call with that name before sending it back. jQuery dynamically creates a new <script> tag and appends it to the <HEAD> of the document, so that when the wrapped JSON data comes back and is attached to the new tag, the callback function executes automatically, completely avoiding the "same origin" policy that would make a regular AJAX call fail. So with JSONP, jQuery doesn't use the XMLHttpRequest object at all; it's all done with dynamically loaded script, which can be loaded from anywhere.
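A simplified sketch of that mechanism in plain JavaScript may make it concrete. These are hypothetical names, not jQuery's actual internals; the jsonp function is browser-only since it touches the DOM:

```javascript
// Substitute a generated callback name for the "?" placeholder in the URL.
function buildJsonpUrl(url, callbackName) {
  return url.replace("callback=?", "callback=" + callbackName);
}

// Browser-only sketch: register a global callback, then inject a <script>
// tag whose src is the JSONP URL. The server wraps its JSON response in a
// call to that function, so loading the script invokes our callback —
// no XMLHttpRequest, no same-origin restriction.
function jsonp(url, success) {
  var cbName = "jsonp_" + new Date().getTime(); // unique-enough name
  window[cbName] = function (data) {
    delete window[cbName]; // clean up the global when done
    success(data);
  };
  var script = document.createElement("script");
  script.src = buildJsonpUrl(url, cbName);
  document.getElementsByTagName("head")[0].appendChild(script);
}
```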

The last phase of the script clears out any previous results with $("#tweets").empty(), then iterates over the results, constructing the HTML for the display. Note how the various function calls can be chained simply by adding a "." and the next function to call.

The makeLink function uses a REGEX to parse the tweet text and turn any embedded http(s) URLs into real <a href=...> hyperlinks.
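A quick usage example shows what the regex does to a typical tweet (makeLink is repeated from the listing above so the snippet runs standalone):

```javascript
// makeLink, copied from the sample page so this snippet is self-contained.
function makeLink(text) {
  var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;
  return text.replace(exp, "<a href='$1'>$1</a>");
}

var linked = makeLink("new post up at http://example.com/blog");
// → new post up at <a href='http://example.com/blog'>http://example.com/blog</a>
```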


You can try out a live demo of this here:

I also did another of these that implements "endless scrolling" for paging, which you can view here:
It really was not too difficult. First, I added some global variables:

var pageSize = 8;
var currentPage = 1;

Then, nested into the main method, this:

$("#tweets").scroll(function () {
    // check if we're at the bottom of the scroll container
    if ($(this)[0].scrollHeight - $(this).scrollTop() == $(this).outerHeight())
    {
        // If we're at the bottom, retrieve the next page
        currentPage++;
        $("#submit").click();
    }
});

And of course, we need to apply some CSS to the "tweets" DIV:

#tweets { position: absolute; left: 186px; top: 105px; width: 376px; height:550px; overflow:auto;  }
#tweets p { font-size: 14px; margin-bottom: 10px; padding: 10px; color: #7a8a99; }
#tweets p a { padding-left:2px; }
#tweets p a img { border:none; }

The result is that as you scroll the container div, the .scroll handler checks whether you're at the bottom. If so, it simply increments the currentPage variable and triggers the submit button's click handler, retrieving the next "set" of tweets from your search.
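The bottom check boils down to simple arithmetic. As a sketch (the helper name is hypothetical), allowing a small tolerance is safer than the strict equality used above, since scroll positions can land on fractional values in some browsers:

```javascript
// Hypothetical helper equivalent to the check in the scroll handler:
// true when the remaining scroll distance is within `tolerance` pixels.
function atBottom(scrollHeight, scrollTop, visibleHeight, tolerance) {
  tolerance = tolerance || 1; // default to 1px of slack
  return scrollHeight - scrollTop - visibleHeight <= tolerance;
}

atBottom(1050, 500, 550); // → true  (scrolled all the way down)
atBottom(1050, 0, 550);   // → false (still at the top)
```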

The search URL is extended with the required paging parameters:
   $.getJSON(baseUrl + searchTerm + "&callback=?" + "&count=" + pageSize + "&page=" + currentPage, function (data) { ...

Finally, I have a third example page that is substantially identical to the Twitter JSONP endless scrolling demo above, but instead it uses the Topsy Otter API "Experts" query. Use this to search for Twitter "Experts" based on a keyword or phrase. You can view that here.

Take the time to learn jQuery and the many ways to incorporate it into your web applications. There are numerous resources to help you, and several good books on jQuery as well.

By Peter Bromberg