Portfolio Assessment

V. Klenowski, in International Encyclopedia of Education (Third Edition), 2010

Assessment and Teacher Judgment

Summative assessment of the portfolio is designed to provide quality information about student learning in a timely, manageable, and inexpensive manner, without impacting negatively on teaching and learning. It is high stakes and occurs for certification and selection in a range of contexts. The selection or certification process can provide the individual with a statement of achievement for entry into a profession or further education, or it can even lead to promotion. An adequate level of reliability is therefore required for comparability purposes. Consistency of standards, and consistent grading, must be implemented to ensure equity and fairness and to ensure quality in the overall assessment process and outcomes. Specified standards and content frameworks aim to achieve a reasonable degree of reliability and to ensure a level of confidence in the results and comparability across institutions. The standards framework and attributes to be assessed are generally specified by awarding or professional bodies in contexts such as teacher education (Lyons, 1998), higher education (Fry et al., 1999), and doctoral studies (Shulman et al., 2006). Professional development and continuing learning are also assessed using portfolios (Baume and Yorke, 2002; Orland-Barak, 2005; Jackson and Ward, 2004; Hay and Moss, 2005).

When assessing the portfolio summatively, the consistency of approach to the assessment tasks and consistency of teacher judgment in assessing the portfolio of work, using the standards framework, need to be monitored. Replicability and comparability are the key qualities (Gipps, 1994). Professional judgment in the use of criteria and standards for assessment is developed through moderation practice. Such professional collaboration helps to maintain consistent application of standards.

Holistic or analytic approaches can be adopted to assess the portfolio of work. In the analytic approach, different aspects of the portfolio are assessed independently and the judgments of the quality of the parts are aggregated to obtain a total grade. Holistic approaches require a judgment of overall quality, with attention to how the individual tasks or samples of work contribute to the whole. The multiple entries of the portfolio require the assessor to engage in iterative and cyclical processing sequences, which differ from the assessment of a single work. Major threats to validity and reliability arise when assessors omit important criteria that have been provided (“construct underrepresentation”) or give particular weight to some criteria while failing to attend to all the given criteria with equal evaluative attention (“construct-irrelevant variance”) (Messick, 1995).
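To make the analytic approach concrete, here is a minimal scoring sketch in Python. The criterion names, weights, and grade boundaries are hypothetical illustrations, not part of any published scheme; the explicit check for unscored criteria mirrors the construct-underrepresentation threat described above, and the fixed weights make any uneven evaluative attention visible rather than implicit.

```python
# Minimal sketch of analytic portfolio scoring. All criteria, weights,
# and grade boundaries are hypothetical illustrations.

CRITERION_WEIGHTS = {            # each criterion judged independently, 0-100
    "evidence_of_learning": 0.4,
    "reflection": 0.3,
    "organization": 0.2,
    "presentation": 0.1,
}

GRADE_BOUNDARIES = [(85, "A"), (70, "B"), (55, "C"), (0, "D")]

def analytic_grade(scores):
    """Aggregate independent per-criterion scores into a total grade.

    Refusing to proceed with missing criteria guards against silently
    under-representing the construct being assessed.
    """
    missing = CRITERION_WEIGHTS.keys() - scores.keys()
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    total = sum(w * scores[c] for c, w in CRITERION_WEIGHTS.items())
    return next(grade for cut, grade in GRADE_BOUNDARIES if total >= cut)

print(analytic_grade({
    "evidence_of_learning": 78,
    "reflection": 82,
    "organization": 70,
    "presentation": 65,
}))  # weighted total 76.3 -> "B"
```

A holistic judgment, by contrast, cannot be decomposed into such a weighted sum: the assessor attends to the whole collection at once, which is precisely why consistency of holistic judgments depends on moderation rather than arithmetic.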

The use of portfolios for formative purposes to enhance learning and for professional development is well established. One of the major advantages of the portfolio for learning purposes is the opportunity it provides to monitor development and for teachers and instructors to provide feedback to the learner to fulfill a transformative function in the learning. To achieve this, there is a need for substantive conversation concerning the qualities of the learning. This implies a facilitator role for mentor or teacher. Student self-reflection – promoted through conversation – requires interactive dialog that facilitates student recognition of strengths or weaknesses in their learning. Insights regarding how to improve are an intended consequence of this process. Thus, the portfolio provides the structure and process to facilitate understanding of one’s own learning or professional practice (Klenowski et al., 2006) and for increasing one’s self-regulating capacity.

Research on how best to support learning in professional contexts and how to assess the rich, qualitative materials in portfolios concludes that a hermeneutic, interpretative approach is appropriate (Tigelaar et al., 2005).

The resource implications of this approach are acknowledged: any method for interpreting portfolio evidence in an equitable and responsible manner will require time and substantive conversation. Professional dialog in the context of teacher education, for instance, is fundamental for realizing the potential of this form of assessment and for engaging pre-service teachers in a deep understanding of what effective teaching means.

The problematic nature of reflective statements in portfolios for learning and professional development has been researched, and it is more apparent when portfolios are assessed summatively (Chetcuti et al., 2006; Orland-Barak, 2005). Students are reluctant to include authentic reflections that expose their areas of weakness or gaps in learning, and they can revert to “tactical writing” to convince the assessor of their achievements (Meeus et al., 2006). Students will not be inclined to reveal their failures (Smith and Tillema, 2003). Reflections incorporated in the portfolio under such conditions will be neither reliable nor genuine.

URL: https://www.sciencedirect.com/science/article/pii/B9780080448947003158

Assessment and the Regulation of Learning

L. Allal, in International Encyclopedia of Education (Third Edition), 2010

Continuity between Formative and Summative Assessment

Although formative and summative assessments have clearly different goals, the question can be raised as to their possible synergy in promoting learning (Harlen, 2005). The pressure of summative assessment, particularly when it is linked to frequent standardized testing, often leaves little space for the practice of formative assessment to develop. The consequences for the regulation of learning can be highly detrimental, especially for students who encounter failure repeatedly and cease trying to exert any control over their own learning. At the same time, summative assessment is necessary as a means of assuring social recognition of students' accomplishments both in school and outside. Students themselves inevitably ask: what knowledge and skills have I in fact acquired, and how do they measure up to expectations in society at large?

Continuity between formative and summative assessment can be developed in several ways. The first is through the alignment of both types of assessment with the curriculum goals underlying teaching and learning in the classroom. If this alignment is clearly perceived by students, the impact on their own goal setting can be very strong. A second point of continuity concerns the development of means of reporting the results of summative assessment so as to provide students with high-quality feedback about learning outcomes. When students receive a profile of test results, a graph comparing outcomes on different parts of a test, a set of rubrics describing the qualities of a text, or teacher comments that accompany a grade, they can use this information to regulate their subsequent investment in learning. A third point of continuity has to do with student involvement in summative assessment. This form of assessment inevitably entails a judgment formulated by a professional (teacher, examiner, or other expert) about the quality of student learning. It is possible, nevertheless, to develop some degree of active student engagement in the way summative assessment is conducted. For example, in portfolio assessment used for summative purposes (grading and certification), students can participate in the selection of the work samples to include in the portfolio and be asked to write self-reflective commentaries that accompany and put their work into perspective. In professional education, summative assessment often takes place in conferences where the self-assessment expressed by the student is confronted with the assessment formulated by the teacher or supervisor. Students' knowledge of the conditions in which summative assessment will take place can have an important influence on the regulation of their investment in learning prior to being assessed. Teachers' knowledge of these conditions can have an equally important influence on how they organize learning activities and interact with students. To conclude, both formative and summative assessments provide explicit frames of reference that guide the processes of co-regulation of student learning.

URL: https://www.sciencedirect.com/science/article/pii/B9780080448947003626

Assessment in Schools – Primary Science

W. Harlen, in International Encyclopedia of Education (Third Edition), 2010

Reporting Individual Student Performance

Summative assessment provides information with regard to what students have achieved at certain times. As in the case of formative assessment, the assessment process involves gathering evidence, interpreting it, and reporting it. In some circumstances, the information is fed back into teaching, but this is not the primary purpose.

Evidence for summative assessment can be gathered in various ways:

by special tests or tasks designed for students to show what they can do at a particular time;

by summarizing evidence from regular work up to the time of reporting; and

by combining evidence from ongoing work and special tasks or tests.

Special Tasks or Tests

Assessment for summative purposes needs to be highly reliable, as it may be used for grouping or selection that can affect future learning opportunities; for that reason, there is an attraction in using special tasks or tests. The items or tasks can be specified, controlled, and presented in the same way to all students, and marked using the same rubrics or criteria. Thus, they appear to give students the same opportunities to show what they can do at a particular time and, therefore, to be fair. However, giving the same tasks is not the same as giving the same opportunities; students vary in their reaction to – and, therefore, in their performance in – different tasks which appear to make the same demands. The reliability of a test is also limited by the fact that only a small number of test items or tasks can be included, and a different selection from all possible items could lead to a different result. Calculations based on the national tests taken by students in England at the end of primary school indicate that this effect could result in at least a third of pupils being given the wrong level (Wiliam, 2001).

The effect of the selection of items or tasks is particularly serious in science, since tasks that attempt to assess inquiry skills, problem solving, and the application of concepts in real situations are necessarily time consuming, and only a small number can be included in a test. Students’ response to them is highly dependent on the choice of content. In a study in the United States by Pine et al. (2006), fifth-grade students were assessed using several hands-on performance tasks, including one based on Paper Towels (in which they had to test which of three different kinds of paper towels would hold most water) and one called Spring (about the length of a spring when different weights were hung on it). The researchers found “essentially no correlation for an individual student’s scores. Students with either a 9 or a 1 Spring score had Paper Towels scores ranging from 1 to 9” (Pine et al., 2006: 480). Because the task-sampling variation is so large, obtaining a reliable score for an individual student would require the individual to tackle a totally unacceptable number of tasks.
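The scale of the task-sampling problem can be illustrated with a small simulation. The sketch below is purely hypothetical – the cut scores, noise level, and student parameters are assumptions for illustration, not a reconstruction of Wiliam's (2001) calculation or the Pine et al. (2006) data – but it shows how a test built from a few high-variance tasks frequently assigns the wrong level to a student whose true attainment lies near a boundary.

```python
import random

# Illustrative Monte Carlo sketch of task-sampling variation. All numbers
# (cut scores, noise level, trial count) are hypothetical assumptions.

random.seed(1)

# Hypothetical cut scores mapping a 0-100 mean score to a reported level.
CUT_SCORES = [(80, "level 5"), (60, "level 4"), (40, "level 3"), (0, "level 2")]

def level(score):
    for cut, name in CUT_SCORES:
        if score >= cut:
            return name

def observed_level(true_score, n_tasks, task_sd):
    """Average noisy scores on a random sample of tasks, then map to a level."""
    scores = [min(100, max(0, random.gauss(true_score, task_sd)))
              for _ in range(n_tasks)]
    return level(sum(scores) / n_tasks)

def misclassification_rate(true_score, n_tasks, task_sd, trials=10_000):
    true = level(true_score)
    wrong = sum(observed_level(true_score, n_tasks, task_sd) != true
                for _ in range(trials))
    return wrong / trials

# A student whose true attainment (62) sits just above the level-4 cut is
# often misclassified when only a few tasks can be included in the test.
for n_tasks in (3, 10, 40):
    rate = misclassification_rate(true_score=62, n_tasks=n_tasks, task_sd=20)
    print(f"{n_tasks:>2} tasks: wrong level in {rate:.0%} of simulated tests")
```

Because the sampling error of the mean shrinks only with the square root of the number of tasks, driving the misclassification rate down would require far more extended performance tasks than any test could reasonably contain.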

This is an example of the interdependence of validity and reliability – where greater validity tends to mean lower reliability. The corollary is also true. Where greater reliability is paramount, the selection of items for a test gives preference to those that can be marked unambiguously as correct or incorrect. These are usually items requiring factual knowledge rather than the application of skills – lowering the overall validity of the test. Low validity is a serious defect of tests, since what is tested is inevitably taken as an indication of what is important to teach. The effect is exacerbated by high-stakes use of test results, which not only demands high reliability in the items but puts pressure on teachers to use methods that seem to be necessary for passing the tests (Harlen and Deakin Crick, 2003). This was shown clearly by Johnston and McClune (2000) in their work on the effect of the high-stakes tests taken by students at the end of primary school to decide the type of secondary school to which they transfer. They found a high proportion of teaching through transmission of information and highly structured activities, with little time for direct investigation by students. When interviewed, the teachers indicated that they felt constrained to teach in this way on account of the nature of the tests.

Summarizing Evidence from Regular Activities

The problems of test validity are avoided by using the evidence that teachers collect during teaching. In theory, this will cover all goals of learning and will mean that evidence can be used both for formative assessment and for summative assessment. However, the important difference between these purposes has to be borne in mind – that, for summative assessment, the evidence has to be interpreted in terms of levels or grades, whereas, in formative assessment, interest is only in identifying next steps and how to take them. The indicators of development in Tables 2 and 3 are at too detailed a level for summary reports. What is needed for reporting to parents or to other teachers is, at most, an overall judgment concerning what has been achieved in terms of, for instance, knowledge and understanding of life processes and living things or scientific inquiry skills.

The process of making a judgment involves comparing the best evidence from the work during the time over which performance is being reported with the criteria defining grades or levels. This does not require the retention of vast amounts of work since the aim is not to arrive at an average level of performance, but one based on the most recent and best performance. In the case of written work, this can be accumulated in a folder in which earlier pieces of work are replaced by later ones that better reflect students’ developing abilities. Students can also help in this selection and, in the process, acquire some understanding of the broader goals of their work and of the criteria by which its quality is judged.

While teachers’ observations and the collection of students’ work can provide highly valid information, its reliability may well be low unless appropriate action is taken to ensure consistency in using criteria. There are various ways in which teachers’ judgments can be aligned, or moderated. These include teachers meeting to discuss specific examples of work and compare their judgments; using exemplars of assessed work, either published or created within a school; or using a brief test or set of tasks designed to indicate work at a certain level.

A Combination of Ongoing Work and Special Tasks

In some systems, both teachers’ judgments and test scores are required in summative assessment, in recognition that tests cannot cover the full range of goals. However, difficulties arise when attempts are made to combine or compare these measures. For reasons mentioned earlier, greater weight is generally given to tests, and it is often forgotten that, since tests and teachers’ judgments assess different aspects, it is to be expected that their results will differ. Thus, using tests to moderate teachers’ judgments is a dubious practice. It is preferable for tests and special tasks to be used to supplement the evidence teachers have in coming to their judgments. Used in this way, tests and tasks can fill the gaps where regular activities have, for one reason or another, not provided evidence of certain kinds. They are also particularly useful, as good examples, for new and inexperienced teachers. In order to avoid some of the disadvantages of tests, these tasks can be embedded in normal work. The Walled Garden tasks described by Schilling et al. (1990) provide an example of embedded tasks (see Table 3).

URL: https://www.sciencedirect.com/science/article/pii/B9780080448947003298

Conceptual Apparatus

Jytte Brender, in Handbook of Evaluation Methods for Health Informatics, 2006

2.1.2 Summative Assessment

The purpose of summative assessment is to contribute a statement of the properties of the object assessed to a decision-making context. Typical examples of summative assessment are (1) evaluation of the fulfillment of objectives or (2) the kind of assessment carried out in a contractual relationship, when an IT system is delivered and one wants to ascertain that the installation functions in accordance with the contract.

In summative assessment of the functional mode of operation, it is usually taken for granted that, when the users sit down at the IT system in order to evaluate it, it is (reasonably) free of programming bugs. However, this is an unreasonable requirement in situations where constructive assessment is used precisely to guide the future direction of the development. Instead, one has to handle errors as a necessary evil.

In its philosophy, Health Technology Assessment (HTA) is summative by nature: an analysis is carried out of a number of qualities of a device, a technology, or a service, with the aim of providing a basis for a political or administrative decision. Once the results of an HTA are ready, they can form part of a negotiation process with a vendor, thereby giving the HTA a constructive (sub)role.

URL: https://www.sciencedirect.com/science/article/pii/B9780123704641500022

The Purpose of Educational Evaluation

S. Mathison, in International Encyclopedia of Education (Third Edition), 2010

Learning

Often referred to as summative assessment or assessment of learning, this is evaluation that occurs at the end of a period of instruction and often results in a grade that summarizes what a learner knows. The period of instruction may be a unit of study, a project or paper, a complete course, or even a larger timeframe like fourth grade. Evaluation of learning in this context might take the form of quizzes, chapter tests, end-of-term examinations, or government-mandated standardized tests. Evaluations of learning done for accountability purposes do not play a role in learning or teaching, as they occur after the completion of educational interventions and are meant to label the level of performance or achievement.

Performance measurement is the term that typically captures a more general accountability for learning: the sort of broad assessments of educational attainment such as international tests of achievement like the Programme for International Student Assessment (PISA) or the US National Assessment of Educational Progress (NAEP). These evaluations are meant to provide information on student outcomes to make judgments about the quality of education at local, national, and international levels. In addition, these evaluations often draw attention to differential effects, such as the achievement gap between social classes or racial groups.

While evaluations of learning that are done for accountability purposes are often equated with summative evaluation, this is accurate only for the specific learning context and participants. Such evaluations can be used for formative purposes, but only in the sense of the improvement of similar, but future teaching and learning events (Figure 1).

Figure 1. Relationship between evaluating learning for accountability and ameliorative purposes.

URL: https://www.sciencedirect.com/science/article/pii/B978008044894701592X

Online assessment

Robyn Benson, Charlotte Brack, in Online Learning and Assessment in Higher Education, 2010

Scheduling assessment tasks carefully

In linking formative and summative assessment, scheduling plays an important part in assessment design. It has increased impact in a fully online environment, where communication around assessment tasks may be the main form of contact you have with your students. Establish regular contact related to assessment, beginning with an early diagnostic task.

For example …

Set an early diagnostic assessment task to make contact with students, determine their needs, and provide an opportunity for early guidance and feedback. This will help to develop students’ confidence, establish a relationship with them, and provide some early direction for future assignments. If you have a large enrolment, devise a task which does not require a lengthy comment, and to which you can respond on a simple marking form designed to fulfil the role of providing feedback. Speed up your feedback by responding online.

Schedule assignments throughout the semester so that there are opportunities for progressive feedback and ensure that you provide feedback well before the next assignment is due.

Where possible, try to ensure that the tasks you set do not clash with assessment tasks for other, related subjects.

Although it is important that students receive early feedback, it is helpful if early assessment tasks have minimum summative weighting while the students are becoming familiar with requirements. Providing quality feedback while minimising assignment turnaround times can be a difficult balance to achieve (though we have already mentioned some options in relation to ‘who assesses?’ and more will emerge as we consider some of the online options).

Continuous assessment, through a set of (preferably interlinked) assessment tasks, is much better for learning than many of the alternatives, and it is another strategy which reduces opportunities for plagiarism. It can also contribute to the validity of authentic assessment tasks which are individualised for students. However, it does present some challenges in terms of management. It means continuous time spent assessing rather than the ‘one-hit’ of marking an assignment or examination. While continuous assessment is valuable, take care not to over-assess, as that will have negative consequences for both you and your students. All you need is an appropriate sample of student learning that reflects the unit objectives.

For example …

In a project with several parts, asking students to provide an outline of one part might be sufficient to determine how they are progressing and give them effective feedback for developing the assessment task further.

URL: https://www.sciencedirect.com/science/article/pii/B9781843345770500043

Summative Assessment by Teachers

R. Daugherty, in International Encyclopedia of Education (Third Edition), 2010

Which Teachers’ Judgments?

Discussion of the merits of summative assessment by teachers has sometimes been clouded by a lack of clarity as to which teachers are called upon to judge student performance. There are three possible answers to that question:

a teacher who has also been responsible for setting and managing the work that is to be judged (the GCSE example above);

a teacher working closely with the teacher who has set and managed the work, possibly in the same school (the Vermont portfolios example above); and

a teacher whose workplace is equivalent to, but entirely separate from, the school in which the work they are judging was undertaken (the New Zealand NEMP example above).

For teachers in the first of these categories, their knowledge of the students concerned could enable them to make more appropriate inferences from the evidence than would be possible for a teacher who did not know those students. On the other hand, there is also evidence from research (Harlen, 2004) of bias in judgments by a student’s own teachers, much of it unintentional. In the third category, the rationale for teachers judging student performance is that they are drawing upon the professional expertise they bring from their own classroom experiences.

URL: https://www.sciencedirect.com/science/article/pii/B9780080448947003638

Assessment in Vocational Education

K. Ecclestone, in International Encyclopedia of Education (Third Edition), 2010

Formative and Summative Activities and Processes

In the UK's vocational education system, many tutors regard their main assessment role as that of a translator of official criteria and specifications, which are usually detailed and prescriptive. It is commonplace for tutors to break strongly framed assignment briefs up into sequential tasks to meet each criterion. However, at higher levels of the vocational pathway, assessment becomes less prescriptive and more open ended. Students prepare their assignments by working to the official criteria specified for grades in each unit. In many courses, students can submit a completed draft for feedback which tells them how they can improve their responses to different criteria. However, there is wide variation in formal arrangements for this: in some courses, drafting is done numerous times, while in others, only one opportunity is offered. A large amount of lesson or contact time is used to introduce students to each assignment, to talk through the outcomes of draft assignments, and to allow students to work on assignments.

Formative feedback often takes the form of written advice on plugging gaps in coverage of the criteria, cross-referenced to the assessment specifications. Vocational students have strong expectations that teachers will offer advice and guidance to improve their work.

Portfolios and assignments or projects provide opportunities for both formative and summative assessment. Reviews of progress and formative feedback can be based on a discussion of the portfolio of achievement, while unit/module-based assignments can be improved through feedback on a draft, or a number of drafts in order to pass or gain better summative marks.

Different countries place different emphases on formative or summative purposes. For example, the two purposes are blurred in the UK, with a strong emphasis from awarding bodies (regulatory examination bodies) on standardization of summative judgments between different centers. In Finland, vocational teachers working with workplace assessors are accustomed to using skills tests and other projects for formative purposes and less accustomed to national regulation or standardization. However, a project by the National Board of Education in Finland aims to use the outcomes of summative tests for national evaluation purposes (see Rakkolainen and Ecclestone, 2005).

In the UK's system, summative assessment, achievement and learning have become to a large extent synonymous, where assessment is not merely for learning or of learning: instead, it is learning:

The clearer the task of how to achieve a grade or award becomes, and the more detailed the assistance given by tutors, supervisors and assessors, the more likely the candidates are to succeed; but succeed at what? Transparency of objectives, coupled with extensive use of coaching and practice to help learners meet them, is in danger of removing the challenge of learning and reducing the quality and validity of outcomes achieved… assessment procedures and practices come completely to dominate the learning experience, and ‘criteria compliance’ comes to replace ‘learning’ (Torrance et al., 2005: 46).

URL: https://www.sciencedirect.com/science/article/pii/B9780080448947003328

Assessment in Schools – Oracy

A.L. Bailey, in International Encyclopedia of Education (Third Edition), 2010

Standards setting for oral language

As mentioned, large-scale summative assessments are often used for accountability purposes and can provide a scale score and a percentile ranking of students if norm referenced (i.e., an evaluation of a student's performance relative to other students taking the assessment). However, to be most useful to educators, assessments should also provide levels or bands that are accompanied by descriptors of the skills students at a particular level can be expected to perform. Thus, large-scale assessments can be both norm referenced and criterion based. These assessments undergo standards setting in order to determine what levels or cut scores on the test are considered good, fair, or poor performances. Standards setting is typically conducted by a panel of educators familiar with the language expectations of the different grade levels and the skills associated with different levels of oral-language proficiency. The issue of which standard or standards should be used to gauge a student's fluency is a continuing challenge and must be established anew for each testing situation and student population. In recent years, there has been a shift to standards-based assessment in schools worldwide. However, while these tests attempt to be overtly aligned to a set of desired standards, often the standards themselves have not been validated (e.g., shown empirically to comprise the skills necessary for functioning as a proficient speaker). Even the most proficient speakers are known to produce dysfluencies and errors at times (Davidson, 1994). Moreover, there are few norms for the oral-language development of monolingual school-age children (Nippold, 1995) that test developers can turn to for guidance.
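The difference between the two reporting frames can be sketched in a few lines of Python. Everything here – the band names, cut scores, and cohort data – is a hypothetical illustration, not any real instrument's scale; it simply shows that a criterion-based band depends only on the cut scores a standards-setting panel chooses, while a percentile rank depends entirely on the cohort.

```python
from bisect import bisect_right

# Sketch of the two reporting frames. Band names, cut scores, and cohort
# data are hypothetical illustrations, not a real instrument's scale.

CUTS = [40, 60, 80]                                   # band lower bounds
BANDS = ["beginning", "developing", "proficient", "advanced"]

def band(scale_score):
    """Criterion-based: map a scale score to a descriptor band."""
    return BANDS[bisect_right(CUTS, scale_score)]

def percentile_rank(score, cohort_scores):
    """Norm-referenced: percent of the cohort scoring strictly below."""
    below = sum(s < score for s in cohort_scores)
    return 100 * below / len(cohort_scores)

cohort = [35, 48, 52, 55, 61, 63, 70, 74, 82, 90]
print(band(61))                     # 'proficient', regardless of cohort
print(percentile_rank(61, cohort))  # 40.0, depends entirely on the cohort
```

In these terms, the standards-setting panel's job is to choose the cut scores and write the band descriptors; the percentile computation needs no panel but, by itself, says nothing about what a student can actually do.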

From a sociocultural perspective, the standards-setting procedure faces the issue of privileging certain oral-language characteristics over others. For example, correct pronunciation is a cultural artifact. Setting the target pronunciation consistent with a particular regional or even national variety of a language is primarily a function of sociolinguistic preferences and even prejudices and not any inherent correctness of spoken English. (For example, Received Pronunciation in the UK and the Midwestern dialect in the US are used as the standard variety of the language in native speaker and English as a second language (ESL) contexts, and American-accented English may be chosen instead of British-accented English in the English as a foreign language (EFL) context.) Standards setting should at the very least be guided by which forms of the language will make the language learner intelligible to most speakers of the language, as well as take account of what level(s) of functioning society desires of the language learner.

Box 1 provides examples of classroom-based assessments by language domain that reflect points made in the prior discussion. Note that depending on the purpose of use, with little modification, these assessments can be used formatively for guiding ongoing instruction, diagnostically to drill down to a specific oral-language challenge for a student, or summatively to judge student attainment after a period of instruction.

Box 1

Examples of classroom-based assessments by domain of oral language

Listening-only assessments

 Novel natural speech input

 Tying the topic of the stimuli to other content areas can maximize information gained by the teacher (both language and concept knowledge).

 Responses can be nonverbal: drawing, picture matching, diagramming, model building, total physical response (head nod/shake, or pointing to objects) to show comprehension.

 Grammaticality judgment task

 Response can be simple check marks or true/false circling to indicate whether a spoken utterance is a legal sentence in the language.

Speaking-only assessments

 Production of oral reports, story retelling, personal narratives, or picture descriptions

 Again, the stimuli can be made maximally useful by choosing topics from other content areas.

 Production will overtly require discourse skills (e.g., genre knowledge, organization of language); however, other language skills will be revealed (e.g., pronunciation, lexical knowledge and diversity, and grammatical accuracy).

 SOLOM (student oral-language observation matrix)

 Production is spontaneous speech during classroom activities scored using a multilevel rubric for a variety of oral-language skills.

Listening and speaking assessments combined

 Grammatical imitation task

 Response is the direct imitation of the stimulus utterances. Students typically cannot accurately imitate what they do not already have command over.

 Authentic interaction

 These include question-and-answer sessions with the teacher, interviewing peers, participating in plays, and partner/group discussions and debates.

 Referential communication tasks

 These can take the form of a barrier task, for example, describing a route or an object separated from a naive partner by a screen.

URL: https://www.sciencedirect.com/science/article/pii/B9780080448947003274

Intelligent Systems

V.J. Shute, D. Zapata-Rivera, in International Encyclopedia of Education (Third Edition), 2010

Types of Assessment

We consider here two main types of assessment: summative and formative. Summative assessment reflects the traditional approach used to assess educational outcomes. This involves using assessment information for high-stakes, cumulative purposes, such as promotion and certification. It is usually administered after some major event, like the end of the school year or marking period, or before a big event, like entry to college. Benefits of this approach include the following: (1) it allows for comparing student performances across diverse populations on clearly defined educational objectives and standards; (2) it provides reliable data (e.g., scores) that can be used for accountability purposes at various levels (e.g., classroom, school, district, state, and national) and for various stakeholders (e.g., students, teachers, and administrators); and (3) it can inform educational policy (e.g., curriculum or funding decisions).

Formative assessment reflects a more progressive approach in education. This involves using assessments to support teaching and learning. Formative assessment is tied directly into the fabric of the classroom and uses results from students' activities as the basis on which to adjust instruction to promote learning in a timely manner. This type of assessment is administered much more frequently than summative assessment, and has shown great potential for harnessing the power of assessments to support learning in different content areas and for diverse audiences. In addition to providing teachers with evidence about how their students are learning so that they can revise instruction appropriately, formative assessment may directly involve students in the learning process, such as by providing feedback that will help students gain insight about how to improve, and by suggesting (or implementing) instructional adjustments based on assessment results.

We now turn our attention to intelligent computer-based systems, which have been around for several decades but have yet to be fully embraced by education. Their primary goal is to enhance student learning, so assessment should, in theory, play a key role in these systems.

URL: https://www.sciencedirect.com/science/article/pii/B9780080448947002475
