Posts Tagged ‘Google’

Google still dominates advertising but Facebook is on the rise

by Ciara Byrne

Google has been criticized lately for a decline in the quality of its search results, but that hasn’t stopped it from continuing to dominate the search advertising market with an 83 percent share, according to a new research report from IHS Screen Digest. Its only real challenge is likely to come from social media.

The report estimates Google’s market share of search advertising at 83 percent in 2010, up from 81 percent in 2009. According to the report, Google’s full-year search advertising revenue in 2010 amounted to $25.4 billion, an increase of 20.2 percent from $21.1 billion in 2009. Google’s official earnings results for the fourth quarter of 2010 will be announced tomorrow.

Google’s revenue growth was even stronger in display and mobile advertising. Display revenue increased by an estimated 61 percent during 2010, boosted by the success of Google’s subsidiaries YouTube and DoubleClick. On the mobile ad side, Google benefited from the increasing popularity of the Android operating system and the AdMob acquisition.

In 2010, Google faced the first major challenge to its search business in many years with the launch of Microsoft’s Bing. However, Google has so far lost little or no ground. Bing grew mostly at the expense of its partner Yahoo.

The report predicts that the only real threat to Google is social media rather than competing search engines. Facebook’s global advertising revenues were estimated at $1.2 billion for the first nine months of 2010. By offering advertisers similar scale, low cost and more focused targeting, social advertising could become a viable alternative to both search and display advertising. Bing already has a partnership with Facebook under which Bing highlights search results endorsed by your Facebook friends, and multiple startups are also working on social search.

While Google remains the undisputed leader in most major markets, there are some notable exceptions, including South Korea, Russia and, most importantly, China. In these markets the dominant search engines belong to local operators: NHN, Yandex and Baidu respectively. After its dispute with the Chinese government in the first half of the year, Google lost significant market share to Baidu in 2010, but it ultimately decided to remain in a search market that is already worth $1.6 billion and grew an impressive 60 percent in 2010.

IHS Screen Digest expects Google’s total revenues to have reached $28.9 billion in 2010, a rise of 22.5 percent from 2009.

EU is probing Google for three formal complaints

by Mike Butcher

The European Commission has launched an investigation into Google after three vertical search engines submitted formal complaints that the firm had used its dominant position to crowd out and bury results from their engines, as reported by various outlets including Bloomberg and the BBC.

The EU is obliged to look into whether Google has purposely lowered the search rankings of price comparison sites Foundem (UK) and Ciao (owned by Microsoft’s Bing), and of French legal search engine ejustice.fr, in its results.

The EU investigation will also take in Google’s ad platform, which covers Google’s unpaid and sponsored search results and “an alleged preferential placement of Google’s own services.”

We’re going to take a look at what all this means.

The European Commission will look at whether Google “imposes exclusivity obligations on advertising partners, preventing them from placing certain types of competing ads on their websites, as well as on computer and software vendors, with the aim of shutting out competing search tools.”

Specifically, whether Google has:
• abused a dominant market position in online search by lowering the ranking of unpaid search results of competing services;
• accorded preferential placement to the results of its own vertical search services, and in so doing shut out competing services; and
• lowered the ‘Quality Score’ for sponsored links of competing vertical search services, the Quality Score being one of the factors that determines the price paid to Google by advertisers.

The background to this is that Ciao, Foundem and Ejustice.fr filed an antitrust complaint against Google back in February. This is separate from any other EU probe Google has faced over its StreetView service – that’s about privacy.

Google’s statement in its defence is that it has marked ads properly and will be “working with the Commission to address any concerns.” Of course, it has to say that. In private, our sources say Google dealt with a lot of these issues some time ago – and indeed it is under an obligation to keep spammy links from lame ‘shopping’ search engines out of its results.

The matter is particularly sensitive for European startup search providers, since there is often a view among them that Google feels it can get away with doing some things in small European markets, away from the harsh gaze of Silicon Valley’s media.

Foundem’s view is that Google is “stifling innovation” and “should not be allowed to discriminate in favor of its own services” and should label its own services in search results.

Indeed, there is even a European trade organisation called ICOMP (Initiative for a Competitive Online Marketplace). Its legal counsel, David Wood, welcomed the probe, saying a “thorough investigation is necessary to determine the workings of Google’s black box.”

Now, although the EU can impose a fine of up to 10 percent of revenue for monopoly abuses (the EU took $1.38 billion from Intel Corp. last year), this investigation is unlikely either to come down hard on Google or to levy a fine – and even if it did, the fine would probably be restricted to Google’s European markets. For instance, in September the Commission closed an investigation into Apple after it introduced cross-border iPhone warranty repair services within the EU. That shows the Commission is, in practice, often prepared to play ball and horse-trade.

But the fact it has launched this investigation does flag that Google will have to step up to the plate and answer the charges in a formal and legal manner.

Source: Techcrunch.com

Dealing with Crawlers – Make effective use of robots.txt

Prepared by Mohammad Jubran

A “robots.txt” file tells search engines whether they can access, and therefore crawl, parts of your site. This file, which must be named “robots.txt”, is placed in the root directory of your site.

You may not want certain pages of your site crawled because they might not be useful to users if found in a search engine’s search results. If you do want to prevent search engines from crawling your pages, Google Webmaster Tools has a friendly robots.txt generator to help you create this file. Note that if your site uses subdomains and you wish to have certain pages not crawled on a particular subdomain, you’ll have to create a separate robots.txt file for that subdomain. For more information on robots.txt, we suggest this Webmaster Help Center guide on using robots.txt files.

There are a handful of other ways to prevent content appearing in search results, such as adding “NOINDEX” to your robots meta tag, using .htaccess to password protect directories, and using Google Webmaster Tools to remove content that has already been crawled. Google engineer Matt Cutts walks through the caveats of each URL blocking method in a helpful video.
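Of these, the robots meta tag is the simplest to illustrate. As a minimal sketch: the tag goes inside the page’s <head> and tells compliant crawlers not to index that page.

<meta name="robots" content="noindex">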

An example robots.txt file:

User-agent: *
Disallow: /images/
Disallow: /search

(1) All compliant search engine bots (denoted by the wildcard * symbol) shouldn’t access and crawl the content under /images/ or any URL whose path begins with /search.

(2) The address of our robots.txt file: it sits in the root directory of the site (in the guide’s example, http://www.brandonsbaseballcards.com/robots.txt).

Keep a firm grasp on managing exactly what information you do and don’t want being crawled!

Best Practices

Use more secure methods for sensitive content

You shouldn’t feel comfortable using robots.txt to block sensitive or confidential material. One reason is that search engines could still reference the URLs you block (showing just the URL, no title or snippet) if there happen to be links to those URLs somewhere on the Internet (like referrer logs). Also, non-compliant or rogue search engines that don’t acknowledge the Robots Exclusion Standard could disobey the instructions of your robots.txt. Finally, a curious user could examine the directories or subdirectories in your robots.txt file and guess the URL of the content that you don’t want seen. Encrypting the content or password-protecting it with .htaccess are more secure alternatives.
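As a rough sketch of the .htaccess approach, Apache’s Basic Authentication can protect a directory; the realm name and the path to the .htpasswd file below are placeholder assumptions:

AuthType Basic
AuthName "Private area"
AuthUserFile /home/example/.htpasswd
Require valid-user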

Avoid:
• allowing search result-like pages to be crawled – users dislike leaving one search result page and landing on another search result page that doesn’t add significant value for them
• allowing URLs created as a result of proxy services to be crawled

 

Glossary

Robots Exclusion Standard: A convention to prevent cooperating web spiders/crawlers, such as Googlebot, from accessing all or part of a website which is otherwise publicly viewable.

Proxy service: A computer that substitutes the connection in cases where an internal network and an external network are connecting, or software that possesses a function for this purpose.

Links

robots.txt generator: http://googlewebmastercentral.blogspot.com/2008/03/speaking-language-of-robots.html
Using robots.txt files: http://www.google.com/support/webmasters/bin/answer.py?answer=156449
Caveats of each URL blocking method: http://googlewebmastercentral.blogspot.com/2008/01/remove-your-content-from-google.html

 


Be aware of rel=”nofollow” for links

Combat comment spam with “nofollow”

Setting the value of the “rel” attribute of a link to “nofollow” will tell Google that certain links on your site shouldn’t be followed or pass your page’s reputation to the pages linked to. Nofollowing a link means adding rel="nofollow" inside the link’s anchor tag (1).

When would this be useful? If your site has a blog with public commenting turned on, links within those comments could pass your reputation to pages that you may not be comfortable vouching for. Blog comment areas on pages are highly susceptible to comment spam (2). Nofollowing these user-added links ensures that you’re not giving your page’s hard-earned reputation to a spammy site.

Automatically add “nofollow” to comment columns and message boards

Many blogging software packages automatically nofollow user comments, but those that don’t can most likely be manually edited to do this. This advice also goes for other areas of your site that may involve user-generated content, such as guestbooks, forums, shout-boards, referrer listings, etc. If you’re willing to vouch for links added by third parties (e.g. if a commenter is trusted on your site), then there’s no need to use nofollow on those links; however, linking to sites that Google considers spammy can affect the reputation of your own site. The Webmaster Help Center has more tips on avoiding comment spam, like using CAPTCHAs and turning on comment moderation (3).
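As an illustration only (this snippet is not from the guide), here is a minimal Python sketch of what automatically nofollowing user comments might look like; the function name and the regex-based approach are assumptions, and a production pipeline should use a proper HTML sanitizer:

import re

def nofollow_links(comment_html):
    # Hypothetical helper: add rel="nofollow" to every <a> tag found
    # in user-submitted comment HTML.
    def add_rel(match):
        tag = match.group(0)
        if "rel=" in tag:
            return tag  # leave existing rel attributes alone in this sketch
        return tag[:-1] + ' rel="nofollow">'
    return re.sub(r"<a\b[^>]*>", add_rel, comment_html)

# Example: nofollow_links('<a href="http://www.shadyseo.com">spam</a>')
# returns '<a href="http://www.shadyseo.com" rel="nofollow">spam</a>'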

(1) If you or your site’s users link to a site that you don’t trust and/or you don’t want to pass your site’s reputation, use nofollow.

<a href="http://www.shadyseo.com" rel="nofollow">Comment spammer</a>

 

(2) A comment spammer leaves a message on one of our blog posts, hoping to get some of our site’s reputation.

(3) An example of a CAPTCHA used on Google’s blog service, Blogger. It can present a challenge to try to ensure an actual person is leaving the comment.

Glossary

Comment spamming: Refers to indiscriminate postings, on blog comment columns or message boards, of advertisements, etc. that bear no connection to the contents of said pages.

CAPTCHA: Completely Automated Public Turing test to tell Computers and Humans Apart.

Using “nofollow” for individual pieces of content, whole pages, etc.

Another use of nofollow is when you’re writing content and wish to reference a website, but don’t want to pass your reputation on to it. For example, imagine that you’re writing a blog post on the topic of comment spamming and you want to call out a site that recently comment spammed your blog. You want to warn others of the site, so you include the link to it in your content; however, you certainly don’t want to give the site some of your reputation from your link. This would be a good time to use nofollow.

Lastly, if you’re interested in nofollowing all of the links on a page, you can use “nofollow” in your robots meta tag, which is placed inside the <head> tag of that page’s HTML (4). The Webmaster Central Blog provides a helpful post on using the robots meta tag. This method is written as <meta name="robots" content="nofollow">.

<html>
<head>
<title>Brandon’s Baseball Cards – Buy Cards, Baseball News, Card Prices</title>

<meta name="description" content="Brandon’s Baseball Cards provides a large selection of vintage and modern baseball cards for sale. We also offer daily baseball news and events in">

<meta name="robots" content="nofollow">

</head>

<body>

Make sure you have solid measures in place to deal with comment spam!

(4) This nofollows all of the links on a page.

Links 

Avoiding comment spam: http://www.google.com/support/webmasters/bin/answer.py?answer=81749
Using the robots meta tag: http://googlewebmastercentral.blogspot.com/2007/03/using-robots-meta-tag.html

 Source: Google SEO Guidelines
