Monday, November 14, 2016

Use of Student Grades in Evaluating Teachers, and Use of State Test Results to Evaluate a District



EDU 6160 bPortfolio Post 6                            Ian Lewis                                 November 14, 2016

Discuss the use of student grades as a factor in evaluating teachers, and use of state test results to evaluate a school district.

Conference week (November 7-10) just ended, and while it was long and exhausting, it was something I had been looking forward to for a while. While grades were an important aspect of the student-led conferences, they were just a small part of the whole process of allowing students to guide their parents through their own learning to date, using data (e.g., test scores, assignment grades, student reflection in writing folders) to help guide reflection, feedback, and goal-setting for the next quarter. For some students, reciting the scripted line, “I am leading this conference because I am responsible for my own learning,” was a formality, but for others it was the first time this idea really sank in, despite continual attempts to reinforce it in class. At the start of the school year, achievement goals were established based on pre-assessment of student learning, and these are tracked throughout the year. Conferences at quarter's end offer a way to assess data and progress toward goals, which inevitably factor into teacher evaluation.

After discussing their greatest accomplishment and biggest worry, students guide their parents through the pre-assessments that have guided their goal-setting and reflection in various subject areas, before finishing with class highlights and work examples. Multiple pre-assessments, for example, assist students in setting Accelerated Reader (AR) goals: the STAR test provides quick feedback on a usually reliable indicator of student reading level, and the Gates-MacGinitie Reading Test offers scores based on percentile rank and an extended scaled score of progress across grades two to twelve in vocabulary and comprehension. Both test scores provide corresponding AR reading levels and point-goal ranges to aid in student goal-setting, which is discussed in the context of success in completing the goal and setting another for the next quarter. Additionally, these reading test scores are presented alongside last year’s SBAC scores for reading and writing (scaled score 1-4) in order to facilitate conversation about progress for the remainder of the present year and what it will take to raise, maintain, or (hopefully not) lower those scores at the end of the year. Traditionally, the University Place School District maintains higher test scores than surrounding districts, and this is also discussed, not just with parents and students at conferences, but as the topic of multiple staff meetings. Because of this standard, high test scores are used as part of the evaluative process for teachers, and while I do not believe low scores would be grounds for discipline, they would suggest a greater need for teacher reflection and attention to best practice.

I was excited for the conferences because, as an anthropologist and now a teacher (two roles that I would argue bear a remarkable resemblance), I was able to share what I had observed over the past two months. These observations, along with those of my mentor, students, and parents alike, all contributed to a discussion of student progress. As stated before, grades were an important part of this discussion, but not its entirety; reflecting on grades and setting goals for growth were just as important. Student grades inevitably factor into teacher evaluation, just as test scores factor into teacher and district evaluation, but as the conferences show, grades and scores are just one piece of the student’s academic puzzle.

Monday, November 7, 2016

Practicality of Surveys and Inventories for the KWL Process in Your Teaching Situation




EDU 6160 bPortfolio Post 5                            Ian Lewis                                 November 7, 2016

Discuss the practicality of surveys and inventories for the KWL procedure in your teaching situation.

Within my teaching situation in social studies, I have not seen a specific survey or inventory used of the kind discussed and illustrated in Theory to Practice Box 11.3 (Shermis & Di Vesta, 2011, p. 325; adapted from Conner, 2006). The KWL process, however, is deeply embedded within the direct instruction and learning activities. In contrast, we have used multiple specific surveys and inventories in English.

As the social studies curriculum is a continuation of what was learned in prior years, the 6th-grade (ancient civilizations) curriculum naturally transitions into 7th grade (the middle ages), just as these time periods follow one another chronologically, and thus tying in what students already know is often part of the everyday lesson. Furthermore, it is necessary to understand the natural relationships across time periods in order to understand the cause-and-effect system that is history. What this looks like in the classroom varies, but it often includes direct reference to and/or discussion of relevant background knowledge, the “What I Know” part of the process (e.g., “Show of hands, who remembers anything about the Roman Empire before we explore the middle ages across the Byzantine and Muslim Empires?”). The textbook does well at referencing background information to stimulate application of background knowledge. Across chapters (which repeat the chronology of the middle ages in different geographical regions), themes and relationships are referenced and drawn upon as well. The “What I Want to Know” part of the process comes in the form of students filling in notes and/or graphic organizers and charts (often corresponding to the textbook’s headings and subheadings) with what we are learning to do as a class, accompanied by textbook reading. Learning targets, referred to as WALTs (for We Are Learning To), are written on the board and referenced daily. The “What I Learned” comes in the form of review activities, application activities (e.g., a project where students create a graphic representation of the three branches of government, which need not be a tree as portrayed in the text), and informal and formal assessment. The KWL process is present, but I have never actually used a graphic organizer or framed the context of learning as specifically “What I KNOW, What I WANT to Know, and What I LEARNED” (Shermis & Di Vesta, 2011, p. 325; adapted from Conner, 2006).

In English, two specific examples come to mind that explicitly reference knowing, wanting, and learning. Vocabulary practice may include a pre-assessment inventory of the vocabulary words in a reading selection, where students identify the words they know and then find the definitions of the remaining words they need (and ideally want) to know, via context clues and/or a dictionary. Students display what they have learned by using each word in a context-clue-rich sentence. A second application of the KWL process using a specific inventory involves collecting information on literary elements such as setting, foreshadowing, characters, and characterization.

When I was substitute teaching last year, multiple science assignments had me playing a video with an accompanying KWL chart. After filling in the “Know” part of their charts individually, we discussed as a group the cumulative knowledge on the subject. The “Want” was then completed individually, and the “Learned” included individual processing during the movie and group discussion afterward. In this situation, it was easy for the teachers to leave a substitute a simple task accompanied by a simple and specific KWL activity and survey/inventory, with no direct instruction necessary, however unengaging. Across subjects, explicitly addressing each part of the KWL process, let alone completing specific KWL inventories, may be more applicable in some areas than others. The KWL process is present across subjects and contexts, but may not be specifically addressed in each.

Monday, October 31, 2016

Relevance of Essay Tests to Grade/Subject Level



EDU 6160 bPortfolio Post 4                            Ian Lewis                                 October 30, 2016

Discuss the relevance of essay tests to your grade level or subject.

Essay tests are incredibly relevant to the grade level and subject areas of my internship experience. The Smarter Balanced Assessment Consortium (SBAC) test that each seventh-grade student completes near the end of the year includes an essay portion in which students read selected passages and respond in essay format to a relevant prompt. In order to prepare students for this test, while also developing skills required by the seventh-grade Common Core State Standards (CCSS), students work throughout the year to master scaffolded steps that lead to the completion of entire informative, comparative, or persuasive essays.

In English Language Arts (ELA), the skills needed for essay tests are constantly drilled and practiced, but social studies also provides practice that inevitably assists with the completion of essay tests. At the start of the year, a pre-assessment essay prompt resembling the one incorporated in the SBAC was given to students to assess their entering essay skills in ELA. These were graded using a 32-point rubric that corresponds to a scaled system of writing bands (0, 0.5, 1, 1.5 … 4), and students are taught and encouraged in ways to increase the band level of their essay writing through attentiveness to each portion of the rubric (thesis, introductory paragraph, concluding paragraph, craft/voice, etc.). Further essays will use the same rubric so that students may continually track and reflect on necessary skills and areas for progress. At this point, however, we have not actually written any essays or had any further essay tests, because we are still building the foundational, scaffolded skills necessary for an essay, such as the ability to write effective summaries and to craft a single, efficient informational paragraph that incorporates text evidence to support ideas.
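As a rough sketch of how a 32-point rubric score might map onto the 0-4 band scale in half-band steps (the actual conversion table is the school's; the simple linear mapping below is my assumption):

```python
def score_to_band(score, max_score=32, max_band=4.0):
    """Convert a raw rubric score to a writing band in 0.5 steps.

    Assumes a linear mapping: 32 rubric points span bands 0-4,
    so every 4 points corresponds to half a band. The real
    conversion table may differ from this.
    """
    if not 0 <= score <= max_score:
        raise ValueError("score must be between 0 and max_score")
    band = score / max_score * max_band  # scale to the 0-4 range
    return round(band * 2) / 2           # snap to the nearest 0.5

print(score_to_band(32))  # 4.0 (a perfect rubric score)
print(score_to_band(20))  # 2.5
```

Under this assumption, a student moving from 20/32 to 24/32 on the next essay would see their band rise from 2.5 to 3.0, which makes for a concrete goal to set and track.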

The ability to summarize adequately presents itself in an essay via its introduction. In crafting a thesis statement relevant to two text excerpts, for example, a student will be required to create an introductory paragraph that introduces the texts (titles and authors), briefly summarizes their main points, and states how those main points relate to a thesis topic, all steps practiced in writing a summary based on our format of Somebody (character), Wants (goal), But (conflict), So (rising action and climax), and What (falling action and resolution). In addition to summary skills, single-paragraph expertise is required before an essay can be mastered.

To create a format that transfers easily to essay writing, students are taught a specific structure for informational paragraphs, which was practiced twice this quarter. The Step-Up paragraph model uses three chunks of three sentences each (main idea, incorporated example from the text, and explanation of the text evidence as it relates to the topic sentence), with an emphasis on transitions between main ideas, to enable students to create simply structured informational paragraphs that may later be expanded into essay format: the three main ideas become three body paragraphs, each of which provides three text examples and three associated explanations. As the second quarter begins next month, we will continue to practice this single-paragraph model, but we will now apply it to comparative and persuasive paragraphs. Eventually, we will be ready to practice essays by expanding this single-paragraph structure. As stated before, though, it is necessary to teach the foundational skills before jumping straight into entire essays.

In addition to specific paragraph-writing practice, students encounter short-answer questions on all of their ELA reading-selection quizzes. While these are just short-answer questions, that is not to say they are not useful in building essay-test skills. As previously discussed, our paragraph model emphasizes incorporating text evidence and explaining that evidence in relation to a main idea (skills necessary for crafting successful essays), and the short-answer questions always lend themselves to being turned into a single-paragraph response following this model. While not the primary focus, social studies also allows for essay-skill practice: the social studies unit tests include short-answer questions that let students practice foundational skills to be used later when writing expanded essays. Eventually, the single-paragraph practice will turn into full essays, and students will be able to track their growth and progress as essayists using the previously described rubric and band system.


Sunday, October 23, 2016

Strengths and Limitations in Rubrics



EDU 6160 bPortfolio Post 3                            Ian Lewis                                 October 23, 2016

Describe a rubric used for a unit you might teach with attention to strengths and limitations.

During the first social studies unit of my 7th-grade internship, we used a grading rubric for a mapping project on the Byzantine Empire. The students were largely successful in following the step-by-step instructions for labeling and coding map features. In hindsight, however, the rubric did not quite reflect the importance of the completeness and accuracy of the mapping, a primary goal. Rather, the rubric weighed heavily on the more subjective categories of “neatness,” “effort,” and “color.” Regarding completeness, a student could miss one to five items and only drop one level on the rubric. The map could be essentially useless, missing more than ten items (cities, bodies of water, etc.), but still receive partial credit. Yet if it were “neat and outlined” or “shaded and detailed” (each pairing stating a subjective quality), the student could earn points for components that truly were not based on the learning target. My mentor and I reflected on this and immediately got to work changing the rubric.

The new rubric is 76% weighted toward the location and accuracy of placement of geographic and political map features, with the remaining 24% devoted to aspects of the previous “neatness,” “effort,” and “color,” now far less subjective. Nineteen features (cities, bodies of water, map elements such as a legend, etc.) may each receive up to two points: one for presence and one for accuracy. The coloring is less subjective in that it allocates points for the presence and accurate location of shading versus cross-hatching for the territories of the Byzantine and Muslim empires, respectively. There is now just a single category where points are allocated for “Neat/complete coloring, legible ink labels, obvious time and effort” (4 points), versus “Mostly complete…” (3-2), versus “Lacking color, pencil labels, difficult to identify features” (1-0). We feel this edited rubric will allow for a better assessment of student comprehension, as it focuses on the learning target and objective, to identify and label prominent locations in the Byzantine Empire in order to understand how physical geography affected the development of societies, rather than allocating major points to the subjective categories of neatness and effort.
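The weighting arithmetic behind the revised rubric can be checked with a quick tally. The 8-point value for the shading/cross-hatching category below is my assumption, chosen because it is the value that makes the stated 76%/24% split come out exactly:

```python
feature_points = 19 * 2   # 19 features, up to 2 points each (presence + accuracy)
shading_points = 8        # assumed point value for shading/cross-hatching
neatness_points = 4       # single "neat/complete" category (0-4 points)

total = feature_points + shading_points + neatness_points  # 50 points
print(f"map features: {feature_points / total:.0%}")                      # 76%
print(f"presentation: {(shading_points + neatness_points) / total:.0%}")  # 24%
```

Spelling the split out this way also makes the design choice visible: a missing or misplaced feature now costs the same two points everywhere, while the subjective categories together can shift a grade by at most a quarter of the total.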

There were multiple limitations in the rubric we used for the unit mapping project. The important thing, I suppose, is that we found these limitations and did something about them, illustrating continual learning and reflection on one's own practice. Shermis and Di Vesta (2011, pp. 136-137) note that it is important not to include elements on a rubric that are unrelated to the primary performance task being assessed, and to craft rubrics so that they produce consistent ratings between varied users (a scientific principle of reliability). The edited version aligns better with the learning target, removing unnecessary elements as Shermis and Di Vesta propose, and reduces the subjective elements that would make it difficult to produce similar results across graders. My mentor is excited to use the new rubric next year. I am glad to have been part of the editing and construction of a more efficient and purposeful rubric, which shows how educators constantly reflect, modify, and adjust in order to better organize, align, and present material and to assess student comprehension of it.

Reference List:

Shermis, M. D., & Di Vesta, F. J. (2011). Classroom Assessment in Action. Plymouth, UK: Rowman & Littlefield Publishers, Inc.