TOP LEVEL SEO

Tips and tricks on how to reach the top of the search engine rankings.


Keywords
Sunday, November 16, 2008
Search engines rank web pages according to the software’s understanding of the web page’s relevancy to the term being searched. To determine relevancy, each search engine follows its own group of rules. The most important rules are

• The location of keywords on your web page; and
• How often those keywords appear on the page (the frequency)

For example, a keyword that appears in the title of the page is considered far more relevant than the same keyword appearing in the text at the bottom of the page. Search engines also consider keywords more relevant when they appear earlier on the page (for instance, in the headline) rather than later. The idea is that you'll put the most important words, the ones that really carry the relevant information, on the page first.

Search engines also consider the frequency with which keywords appear. Frequency is usually the share of a page's words that the keyword accounts for: if the keyword is used 4 times out of 100 words, the frequency is 4%. Of course, by this logic you could build the "perfectly relevant" page with one keyword at 100% frequency: just put a single word on the page and make it the title of the page as well. Unfortunately, the search engines don't make things that simple.

While all search engines follow the same basic rules of relevancy, location and frequency, each search engine has its own special way of determining rankings. To make things more interesting, the engines change the rules from time to time, so rankings can change even when the web pages themselves have not. One method of determining relevancy used by some search engines (like HotBot and Infoseek), but not others (like Lycos), is Meta tags. Meta tags are hidden HTML codes that give the search engine spiders potentially important information, such as the page description and the page keywords.
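The frequency calculation described above (4 uses out of 100 words gives 4%) can be sketched in a few lines of Python. This is only an illustration of the arithmetic; the sample page text is made up, and real engines tokenize pages in their own ways.

```python
# Sketch of the keyword-frequency calculation described above:
# frequency = keyword count / total words, as a percentage.
import re

def keyword_frequency(text: str, keyword: str) -> float:
    """Return the keyword's share of all words on the page, in percent."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return 100.0 * hits / len(words)

# A made-up 100-word page using "seo" 4 times -> 4.0%.
page = ("seo tips for seo beginners: learn seo basics and seo tricks "
        + "filler " * 89)
print(keyword_frequency(page, "seo"))  # 4.0
```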

posted by Avans @ Sunday, November 16, 2008   0 comments
Indexing, Meta Tags
Most search engines claim that they index the full visible body text of a page. In a subsequent section, we explain the key considerations to ensure that indexing of your web pages improves relevance during search. The combined understanding of the indexing and the page-ranking process will lead to developing the right strategies.

Meta Tags

The Meta tags 'Description' and 'Keywords' have a vital role, as they are indexed in a specific way. Some of the top search engines do not index keywords that they consider spam. They will also skip certain 'stop words' (commonly used words such as 'a', 'the' or 'of') to save space or speed up the process. Images are obviously not indexed, but image descriptions, Alt text or text within comments are included in the index by some search engines.

The search engine software or program is the final part. When a person requests a search on a keyword or phrase, the software searches the index for relevant information, then returns a report to the searcher with the most relevant web pages listed first.
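Stop-word filtering at index time can be sketched as below. The stop-word list here is an illustrative assumption, not any real engine's list, and actual indexers apply many more rules.

```python
# Minimal sketch of index-time stop-word filtering, as described above.
# STOP_WORDS is a made-up illustrative list, not any engine's real one.
import re

STOP_WORDS = {"a", "an", "the", "of", "and", "or", "to", "in"}

def index_terms(page_text: str) -> list[str]:
    """Return the terms an engine might keep for its index."""
    words = re.findall(r"[a-z0-9']+", page_text.lower())
    return [w for w in words if w not in STOP_WORDS]

print(index_terms("The history of the spider and the index"))
# ['history', 'spider', 'index']
```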
The algorithm-based processes used to determine the ranking of results are discussed in greater detail later. Directories, by contrast, compile listings of websites into specific industry and subject categories, and they usually carry a short description of each website. Inclusion in a directory is a human task and requires submission to the directory producers. Visitors and researchers on the net quite often use these directories to locate relevant sites and information sources; thus directories assist in structured search. Another important reason to be listed is that crawler engines quite often find websites to crawl through their listings and links in directories. Yahoo and The Open Directory are among the largest and best-known directories. Lycos is an example of a site that pioneered the search engine but shifted to the directory model, relying on AlltheWeb.com for its listings.
Spidering
A search engine robot's activity is called spidering, because the program crawls across the web much as a spider moves across its web. The spider's job is to visit a web page, read its contents, follow the links to other pages on that web site, and bring the information back. From one page it travels to several more, and this proliferation follows many parallel and nested paths simultaneously. Spiders revisit a site at some interval, perhaps a month to a few months, and re-index its pages; this way, any changes that may have occurred in your pages can be reflected in the index. The spiders visit your web pages automatically and create their listings. An important aspect to study is what promotes "deep crawl": the depth to which the spider will go into your website from the page it first visited.

Listing (submitting or registering) with a search engine is a step that can accelerate and increase the chances of that engine spidering your pages. The spider's movement across web pages stores those pages in its memory, but the key action is indexing. The index is a huge database containing all the information brought back by the spider, and it is constantly updated as the spider collects more. The entire page is not indexed, and the searching and page-ranking algorithms are applied only to the index that has been created.
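The spider's core loop (visit a page, extract its links, queue them, stop at some depth) can be sketched as follows. The pages here are a hypothetical in-memory site rather than live HTTP fetches, and the depth cap stands in for the "deep crawl" limit described above.

```python
# Sketch of a spider's loop: read a page, pull out its links, queue them.
# SITE is a hypothetical in-memory site, not real HTTP fetching.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

SITE = {  # hypothetical site: page -> HTML body
    "/": '<a href="/tips">Tips</a> <a href="/about">About</a>',
    "/tips": '<a href="/">Home</a> <a href="/tips/seo">SEO</a>',
    "/about": "No links here.",
    "/tips/seo": '<a href="/">Home</a>',
}

def crawl(start: str, max_depth: int = 2) -> list[str]:
    """Breadth-first crawl up to max_depth, skipping pages already seen."""
    seen, frontier, order = {start}, [(start, 0)], []
    while frontier:
        page, depth = frontier.pop(0)
        order.append(page)
        if depth >= max_depth:
            continue  # the "deep crawl" limit: stop following links here
        parser = LinkExtractor()
        parser.feed(SITE.get(page, ""))
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                frontier.append((link, depth + 1))
    return order

print(crawl("/"))  # ['/', '/tips', '/about', '/tips/seo']
```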
SpamDexing
One of the things a search engine algorithm scans for is the frequency and location of keywords on a web page, but it can also detect artificial keyword stuffing, or spamdexing. The algorithms also analyze the way pages link to other pages on the Web. By checking how pages link to each other, an engine can better determine what a page is about: if the keywords of the linked pages are similar to the keywords on the original page, that is a strong signal. Most of the top-ranked search engines are crawler-based, while some are built on human-compiled directories.

The people behind the search engines want the same thing every webmaster wants: traffic to their site. Since their content is mainly links to other sites, their goal is to make the engine bring up the sites most relevant to the search query, and to display the best of these results first. To accomplish this, they use a complex set of rules called algorithms. When a search query is submitted, sites are judged relevant or not relevant according to these algorithms, and then ranked in the order the algorithms calculate to be the best matches first. Search engines keep their algorithms secret and change them often, to prevent webmasters from manipulating their databases and dominating search results. They also want to surface new sites at the top of the results on a regular basis, rather than always showing the same old sites month after month.

An important difference to realize is that search engines and directories are not the same. Search engines use a spider to "crawl" the web and the web sites they find, as well as submitted sites; as they crawl, they gather the information their algorithms use to rank your site. Directories rely on submissions from webmasters, with live humans viewing your site to determine whether it will be accepted. If accepted, directories often rank sites in alphanumeric order, sometimes with paid listings on top. Some search engines also place paid listings at the top, so it is not always possible to reach the top three or more places unless you are willing to pay for it. Let us now look at a more detailed explanation of how search engines work. Crawler-based search engines are primarily composed of three parts.
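A crude version of the stuffing detection mentioned above is a density check: flag pages where one word dominates the text. The 15% threshold here is an illustrative assumption; real engines combine many signals and keep their actual rules secret.

```python
# Toy spamdexing check: flag pages whose top word's density is implausibly
# high. The 15% threshold is an illustrative assumption, not a real rule.
import re

def looks_stuffed(text: str, threshold: float = 15.0) -> bool:
    """Return True if any single word exceeds the density threshold."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return False
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    top_share = 100.0 * max(counts.values()) / len(words)
    return top_share > threshold

print(looks_stuffed("cheap pills cheap pills cheap pills buy now"))    # True
print(looks_stuffed("a short page about travel tips and local food"))  # False
```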
Directories, Search Engines & Traffic…
Want to know the difference between a search engine and a directory, and why each is so important to your success? This section will give you all of that information and more. You already use search engines, so you know how they work from the user's perspective. From your own experience as a user, you also know that only the results at the top of the heap are likely to attract you. It doesn't amuse you to learn that your search yielded 44316 results; perhaps even number 50 on the list will not get your custom, or even your attention. Thus you know that getting listed at the top, or as near to it as possible, is crucial. Since most search engine traffic is free, you'll usually find it worth your time to learn a few tricks to maximize the results from your time and effort. In the next section, you will see how search engines work, this time from your perspective as a website owner.

Web Crawlers

It is the search engines that finally bring your website to the notice of prospective customers, so it is worth knowing how these search engines actually work and how they present information to the customer initiating a search. There are basically two types of search engines. The first uses robots called crawlers or spiders. Search engines use spiders to index websites: when you submit your website pages through a search engine's required submission page, its spider will index your entire site.

A spider is an automated program run by the search engine system. It visits a web site, reads the content on the actual pages and the site's Meta tags, and also follows the links that the site connects to. The spider then returns all that information to a central depository, where the data is indexed. It will visit each link you have on your website and index those sites as well. Some spiders will only index a certain number of pages on your site, so don't create a site with 500 pages! The spider periodically returns to sites to check for any information that has changed; the frequency with which this happens is determined by the moderators of the search engine. A spider's index is almost like a book: it contains the table of contents, the actual content, and the links and references for all the websites it finds during its search, and a spider may index up to a million pages a day. Examples: Excite, Lycos, AltaVista and Google.

When you ask a search engine to locate information, it actually searches through the index it has created, not the live Web. Different search engines produce different rankings because not every search engine uses the same algorithm to search through its indices.
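The point that an engine "searches its index, not the Web" can be sketched with a tiny inverted index: crawled page text goes in once, and every query is answered from that structure alone. The page URLs and texts below are made up for illustration.

```python
# Sketch of "searching the index, not the Web": build an inverted index
# from crawled page text, then answer queries from that index alone.
# The pages below are made-up examples.
import re

def build_index(pages: dict[str, str]) -> dict[str, set[str]]:
    """Map each word to the set of pages that contain it."""
    index = {}
    for url, text in pages.items():
        for word in set(re.findall(r"[a-z0-9']+", text.lower())):
            index.setdefault(word, set()).add(url)
    return index

pages = {
    "/bali": "low cost travel guide for bali",
    "/seo": "seo tips and adsense tricks",
    "/travel": "travel tips on a low budget",
}
index = build_index(pages)

def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return pages containing every query word (AND semantics)."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

print(sorted(search(index, "travel low")))  # ['/bali', '/travel']
```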
© 2006 TOP LEVEL SEO