How to Get Your Silverlight Pages Indexed By Search Engines

Use the HTML DOM to add dynamic Title, Keywords and Description tags to your Silverlight Pages

There are a lot of articles and tips about "deep linking" and so on with Silverlight.

But, the bottom line is that if you want your Silverlight pages properly indexed by Google and its brethren, you need to be able to dynamically add TITLE, META KEYWORDS and META DESCRIPTION Tags to the hosting page. Here we will show not only how to do this from within a Silverlight application, but also how to do it in response to a search engine crawler.

In a nutshell, search engines follow hyperlinks, and then index any content they find. They collect any additional links they find on the page and crawl those. If you have a Silverlight application that has multiple user controls that respond to user navigation, but they are all hosted in the same page that has a TITLE tag of "My Cool Silverlight Application" and no dynamic META Description or META keywords tags, you really haven't accomplished much from an SEO (search engine optimization) standpoint. This doesn't even take into account the fact that all your "content" may be inside your Silverlight app, which renders it 100% invisible to search bots. You may actually need to consider having some "relevant content" in the HTML page itself to get real search engine action.

The way that your "Page" gets found by a search engine is either you submitted it, it is listed in your sitemap.xml file, or there is a hyperlink to it from somewhere else, perhaps even from another page in your own website. There are other "deep linking" techniques that will allow Silverlight content to be custom-presented based on what is on the QueryString in a unique search-engine optimized url, but I'll save those for another article.
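For example, a sitemap.xml can list each deep-linkable "page" of your Silverlight application as a distinct crawlable URL. The URLs and query-string scheme below are hypothetical, just to illustrate the shape:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Each logical Silverlight "page" gets its own URL (example URLs only) -->
  <url><loc>http://www.example.com/Default.aspx?page=Home</loc></url>
  <url><loc>http://www.example.com/Default.aspx?page=Products</loc></url>
</urlset>
```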

You can add these tags to a page directly from within your Silverlight application itself with the following code. You can also use this same technique to inject relevant content into the HTML of the page, where it will be indexed. The end result is that your Silverlight "Page" will be the target of more search results on Google, Bing, Yahoo and other search engines, and you can combine this technique with the deep-linking properties of, for example, a Silverlight Navigation app to get much better search engine rankings.

First, in your App.Xaml.cs codebehind class, add the following static method:

public partial class App : Application
{
    public App()
    {
        this.Startup += this.Application_Startup;
        this.Exit += this.Application_Exit;
        this.UnhandledException += this.Application_UnhandledException;
        InitializeComponent();
    }

    public static void SetPageInfo(string keywords, string title, string description)
    {
        // Set the page title via the document's "title" property.
        HtmlPage.Document.SetProperty("title", title);

        // Grab the <head> element so we can append the new META tags to it.
        HtmlElement headElem = HtmlPage.Document.GetElementsByTagName("head")[0] as HtmlElement;

        // Create and attach the META keywords tag.
        HtmlElement keys = HtmlPage.Document.CreateElement("meta");
        keys.SetAttribute("name=\"keywords\" content", keywords);
        headElem.AppendChild(keys);

        // Create and attach the META description tag.
        HtmlElement desc = HtmlPage.Document.CreateElement("meta");
        desc.SetAttribute("name=\"description\" content", description);
        headElem.AppendChild(desc);
    }
}

NOTE: Apparently Firefox does not like SetAttribute. Use SetProperty instead.
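If you need Firefox support, a sketch of the SetProperty-based alternative (setting the name and content as separate DOM properties) might look like this. This is my own variant, not code from the original article:

```csharp
// Firefox-friendly variant: set "name" and "content" as DOM properties
// instead of using SetAttribute. (Sketch, under the assumption that the
// surrounding SetPageInfo method provides the "keywords" string.)
HtmlElement headElem = HtmlPage.Document.GetElementsByTagName("head")[0] as HtmlElement;
HtmlElement keys = HtmlPage.Document.CreateElement("meta");
keys.SetProperty("name", "keywords");
keys.SetProperty("content", keywords);
headElem.AppendChild(keys);
```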

Then in your Main page codebehind, you can call the method like this:

public partial class MainPage : UserControl
{
    public MainPage()
    {
        InitializeComponent();
        App.SetPageInfo("test,test2,test3", "This is the title", "This is the description");
    }
}

Incidentally, astute readers might wonder why I am setting both the name attribute and the content attribute "in one go":

keys.SetAttribute("name=\"keywords\" content", keywords);

The answer is, that's the only way I could get this to work! Otherwise, I would get a <meta keywords="X,Y,Z"> without the name attribute. Go figure....

If your Silverlight Application has multiple pages or uses the Navigation framework, you can do this for each page in your application, setting a different Title, meta keywords and meta description tag dynamically.
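With a Navigation application, one natural place to make the call is each page's OnNavigatedTo override. This is a sketch; the page name and tag text below are invented:

```csharp
using System.Windows.Controls;
using System.Windows.Navigation;

public partial class ProductsPage : Page   // hypothetical navigation page
{
    public ProductsPage()
    {
        InitializeComponent();
    }

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        base.OnNavigatedTo(e);
        // Each navigable page advertises its own title/keywords/description.
        App.SetPageInfo("products,widgets,silverlight",
                        "Our Products",
                        "Browse the full widget product catalog.");
    }
}
```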

If you want the search engines to index your Silverlight content, don't expect them to dig into your XAP file and parse the text - not only is it unlikely they'll do it, but you might not want them to anyway.

Search engines want to see, at a minimum, a Page TITLE element and a META Description element. Give them what they want, and your pages that contain Silverlight content will be correctly indexed!

It irks me to no end to see people with blogs that have excellent multiple content pages, and yet every single one has the same Page Title tag that says, "My Blog"! If you give Google, Bing, Yahoo et al. garbage, they're going to give you garbage back, and your search results will reflect this.

You MUST have a unique, keyword-dense Page TITLE and META Description tag for every page on your blog or website. Most search engines will use your META Description tag content verbatim. If this tag is missing, they'll come up with their own, and you may not like it.

If you View Source from the browser, you will not see these elements because they have been dynamically added to the page. But by using a little Javascript, you can easily see that they are in the page:
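The snippet did not survive in this copy of the article; a bookmarklet along these lines (my reconstruction, not the article's original code) does the trick:

```javascript
// Returns the live markup inside <head>, which is where the dynamically
// injected <title> and <meta> tags will appear. (Reconstructed example.)
function getHeadHtml(doc) {
  return doc.getElementsByTagName('head')[0].innerHTML;
}

// Bookmarklet form -- paste into the browser's address bar:
// javascript:alert(document.getElementsByTagName('head')[0].innerHTML);
```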


You can copy and paste the above snippet into the address bar of your browser to see this at work.

The downloadable Visual Studio 2008 solution has the above javascript in an A HREF link right in the test page.

Next Stage: Feed the Bots!

Jim McCurdy astutely pointed out that while the above may be useful, it is still not visible to crawlers. The crawler simply downloads all the markup in the page, but the Silverlight plugin is never instantiated. In order for a Silverlight application to "come to life", it must exist in a live browser DOM. Jim shows a solution using URL rewriting. However, it can be done with an even simpler solution - detecting bots via their User-Agent and writing the custom content directly into the same page.

Let's add a couple of utility methods to our ASP.NET web project's Global.asax codebehind:

public static bool IsBot()
{
    if (HttpContext.Current.Request.UserAgent == null)
        return false;
    string userAgent = HttpContext.Current.Request.UserAgent.ToLower();
    // Note: no empty string in this list -- Contains("") is always true,
    // which would flag every visitor as a bot.
    string[] botKeywords = new string[] { "bot", "spider", "google", "yahoo", "search", "crawl", "slurp", "msn", "teoma" };
    foreach (string bot in botKeywords)
    {
        if (userAgent.Contains(bot))
            return true;
    }
    return false;
}

public static void PlaceMetaTags(string title, string keywords, string description, Page page)
{
    System.Web.UI.HtmlControls.HtmlMeta metaKeywords = new System.Web.UI.HtmlControls.HtmlMeta();
    System.Web.UI.HtmlControls.HtmlMeta metaDescription = new System.Web.UI.HtmlControls.HtmlMeta();
    page.Title = title;
    metaKeywords.Name = "Keywords";
    metaKeywords.Content = keywords;
    metaDescription.Name = "Description";
    metaDescription.Content = description;
    // The controls must be added to the page's Header or they will never render.
    page.Header.Controls.Add(metaKeywords);
    page.Header.Controls.Add(metaDescription);
}

"IsBot" simply compares the User-Agent string to a known list of keywords, and returns true or false. And "PlaceMetaTags" provides a convenient way to write a custom title, meta keywords and meta description tag. Both of these methods, since they are static and public, will be available from any page in your web site via "Global.MethodName...".
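Putting the two together in the hosting page's Page_Load might look like this (a sketch; the tag text matches the sample output shown further below):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    if (Global.IsBot())
    {
        // Feed the crawler meaningful, page-specific tags to index.
        Global.PlaceMetaTags("Title for Bots!",
                             "BotKeyword1, botkeyword2",
                             "The description I want bots to see",
                             this.Page);
    }
}
```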

So to test this, I've constructed a "test page" that uses the WebClient class to request our Silverlight page after setting a "fake" GoogleBot User-Agent:

WebClient wc = new WebClient();
wc.Headers.Add("user-agent", "Googlebot/2.1 (+");
string s = wc.DownloadString("http://localhost:4403/Default.aspx");
this.txtBox1.Text = s; // show the crawled HTML

What happens now when you run the "WebForm1.aspx" page in the sample solution is that the page detects the request is from a "bot", and you will see the crawled HTML in the TextBox with:

<head id="Head1">
<title>Title for Bots!</title>
<meta name="Description" content="The description I want bots to see" />
<meta name="Keywords" content="BotKeyword1, botkeyword2" />

And all is good. We've accomplished two goals: the ability to dynamically write Title and META tags from within a Silverlight application (useful when the page is viewed in a browser, as in "real life"), and the ability to detect when a search engine crawler has requested the page and provide it with meaningful tags to index. Both of these approaches represent useful tools you'll definitely want to include in your Silverlight application "toolkit".

You can download the complete solution here.

By Peter Bromberg