A search engine operates in the following order:

Web Crawling: automated programs methodically browse the web. Web crawlers are also known as ants, bots, and spiders. Meta tags in the pages the crawlers find are analysed to determine how those pages should be indexed.
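At its core, crawling is a graph traversal: start from a seed page, record what you find, and follow the outgoing links. Below is a minimal sketch in Python over a toy in-memory "web" (the page names and data are hypothetical, standing in for real fetched pages and their meta keywords):

```python
from collections import deque

# Toy "web": URL -> (meta keywords, outgoing links). Purely illustrative data.
PAGES = {
    "a.example": (["search", "engines"], ["b.example", "c.example"]),
    "b.example": (["crawling"], ["a.example"]),
    "c.example": (["indexing"], []),
}

def crawl(start):
    """Breadth-first crawl: visit each reachable page once,
    collecting the meta keywords the indexer will use."""
    seen, queue, found = set(), deque([start]), {}
    while queue:
        url = queue.popleft()
        if url in seen or url not in PAGES:
            continue                    # skip pages already visited or unreachable
        seen.add(url)
        keywords, links = PAGES[url]
        found[url] = keywords           # hand the page's metadata to the indexer
        queue.extend(links)             # follow outgoing links
    return found
```

A real crawler would fetch pages over HTTP, respect robots.txt, and rate-limit itself, but the visited-set plus queue structure is the same.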

Indexing: collects and stores data to speed up information retrieval. Without an index, a search engine would have to scan every document on the web, requiring considerable time and computing power.
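The standard data structure here is an inverted index, which maps each term to the set of documents containing it, so lookups avoid scanning every document. A minimal sketch, with made-up document text:

```python
# Hypothetical crawled documents: doc id -> text.
DOCS = {
    1: "web crawlers browse the web",
    2: "an index speeds up retrieval",
}

def build_index(docs):
    """Build an inverted index: term -> set of ids of documents containing it."""
    index = {}
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index.setdefault(term, set()).add(doc_id)
    return index
```

Production indexes also store term positions and frequencies for ranking, and use far more careful tokenisation than `split()`, but the term-to-documents mapping is the essential idea.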

Search: this is the matching of the information gathered by crawling and indexing against the specific terms entered by the user.
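Given an inverted index, answering a query reduces to intersecting the document sets for each query term. A sketch with AND semantics, over a hypothetical pre-built index:

```python
# A tiny inverted index (term -> doc ids), as an indexer might produce. Illustrative data.
INDEX = {
    "web": {1, 2},
    "crawler": {1},
    "index": {2, 3},
}

def search(query):
    """Return ids of documents that contain every term in the query (AND semantics)."""
    postings = [INDEX.get(term, set()) for term in query.lower().split()]
    return set.intersection(*postings) if postings else set()
```

Real engines then rank the matching documents by relevance and popularity signals rather than returning a raw set.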

LIMITATIONS: Although search engines are programmed to index and rank websites based on their popularity and relevancy, various political, economic, and social biases affect the information they provide. For example, companies pay to have their results appear higher in a search, and search results may be removed to comply with local laws or political interests. Google bombing is one example of an attempt to manipulate search results for political, social, or commercial reasons.




Sites are also made more appealing through navigation design and the overall user experience.

LIMITATIONS: This does not provide a full view of the web, simply the most appealing view of it, which is usually the view backed by the most money.
