Tuesday, May 1, 2012

Assessing your searching abilities, Day #6--Searching and Evaluating like a Pro

Searching like a pro is a multi-step process, and searching ability needs to be assessed just like any other learning objective.

To search like a pro, you must:

  1. Create search questions based on your topic
  2. Develop search keywords from your search questions
  3. Try out the different keywords and decide which ones yield the best results
  4. To decide which results are best, ask your search questions again and check whether the results answer them (this step also includes finding the best search engine for your topic)
  5. Finally, evaluate the results you have chosen to use
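
The keyword-testing loop in steps 3 and 4 can be sketched in code. This is a toy illustration only: the keyword sets, result snippets, and scoring rule are all hypothetical stand-ins, since a real lesson would use live search engine results.

```python
# A toy sketch of steps 3 and 4: score each candidate keyword set by how many
# search-question terms appear in the result snippets it returns.
# All data below is hypothetical, standing in for real search results.

def score_results(snippets, question_terms):
    """Count how many question terms appear somewhere in the result snippets."""
    text = " ".join(snippets).lower()
    return sum(term in text for term in question_terms)

# Terms drawn from a hypothetical search question about coastal erosion.
question_terms = ["causes", "erosion", "coastal"]

# Hypothetical snippets returned for two candidate keyword sets.
results_by_keywords = {
    "beach damage": ["photos of beach damage after storms"],
    "coastal erosion causes": ["the main causes of coastal erosion are waves and wind"],
}

best = max(results_by_keywords,
           key=lambda kw: score_results(results_by_keywords[kw], question_terms))
print(best)  # prints "coastal erosion causes" -- its results answer the questions
```

The point for students is the same as in the steps above: a keyword set is "best" only relative to the questions you are trying to answer.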

To learn this process, students need to be assessed on each of the searching steps. This ensures that they master the process, and mastering it lets them think critically about the world around them: finding the right source won't be a matter of luck.

So, how do you measure whether or not students "get" each of the steps? Try some of these assessments and activities for teaching and assessing each of the four steps. 

Step 1:

Step 2:

Step 3:

Step 4:
UC Berkeley has a useful five-step guide to searching as well. Kathy Schrock's Information Literacy Primer is also a good resource to use when trying to search and evaluate like a pro. 

As a reminder (courtesy of UC Berkeley), it is important to understand how search engines work before using one:

How do Search Engines Work?
Search engines do not really search the World Wide Web directly. Each one searches a database of web pages that it has harvested and cached. When you use a search engine, you are always searching a somewhat stale copy of the real web page. When you click on links provided in a search engine's search results, you retrieve the current version of the page.
Search engine databases are selected and built by computer robot programs called spiders. These "crawl" the web, finding pages for potential inclusion by following the links in the pages they already have in their database. They cannot use imagination or enter terms in search boxes that they find on the web.
If a web page is never linked from any other page, search engine spiders cannot find it. The only way a brand new page can get into a search engine is for other pages to link to it, or for a human to submit its URL for inclusion. All major search engines offer ways to do this.
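
To make the crawling idea concrete, here is a toy crawler over an in-memory "web": a dictionary mapping each page to the links it contains (all page names are made up). Like a real spider, it only discovers pages by following links, so a page nothing links to is never found.

```python
from collections import deque

# Toy web: each "page" maps to the list of links it contains (hypothetical names).
web = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["post1"],
    "post1": [],
    "orphan": [],  # no page links here, so a crawl from "home" never finds it
}

def crawl(start):
    """Breadth-first crawl: follow links from pages already seen, like a spider."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for link in web.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

print(crawl("home"))  # "orphan" is absent: spiders only find linked pages
```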
After spiders find pages, they pass them on to another computer program for "indexing." This program identifies the text, links, and other content in the page and stores it in the search engine database's files so that the database can be searched by keyword and whatever more advanced approaches are offered, and the page will be found if your search matches its content.
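
The indexing step can be sketched as building an inverted index: a mapping from each word to the set of pages containing it. This is a simplified illustration with made-up page contents, but it shows why a keyword search is a fast database lookup rather than a live scan of the web.

```python
from collections import defaultdict

# Hypothetical cached page text, as a spider might have harvested it.
pages = {
    "page1": "search engines crawl the web",
    "page2": "spiders follow links on the web",
}

# Build an inverted index: word -> set of pages containing that word.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(word):
    """A keyword search is just a lookup in the index, not a live web scan."""
    return index.get(word.lower(), set())

print(search("web"))      # both pages mention "web"
print(search("spiders"))  # only page2
```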
Many web pages are excluded from most search engines by policy. The contents of most of the searchable databases mounted on the web, such as library catalogs and article databases, are excluded because search engine spiders cannot access them. All this material is referred to as the "Invisible Web" -- what you don't see in search engine results.
