M10 - Assessing students in units

Introduction

In this module, we focus on assessment for the purpose of grading students' knowledge, skills and attributes against a standard (summative assessment), both in units and in programs. We build on the idea of constructive alignment from Module 8, in particular how assessment can be aligned to outcomes and activities, and we extend the assessment-feedback-reflection model from Module 6 to include standards and grading.


Learning Outcomes

At the end of this module you will be able to:

  • compare and contrast a range of assessment strategies and approaches
  • design and implement effective assessment strategies in mathematical units, including writing rubrics and marking guidelines
  • describe strategies to make mathematics assessment accessible, equitable, valid, and reliable.

Module Structure

The module proceeds as follows:


Summative assessment cycle

Summative assessment (i.e. assessment for the purpose of making a judgement about a student’s achievement) introduces two extra elements into the assessment cycle described in Module 6, Assessing students in classes: standards and marking. In contrast to formative assessment, summative assessment is ‘high stakes’: the tasks are larger and the feedback is more formal. The students' responses are graded against a standard.

In this context a standard is the level of achievement of a student on the task. It may be signified by a numerical mark or a grade, for example High Distinction, Distinction, Credit and Pass. The standard may be set by an external body, or there may be grade descriptors at your university. The assessment task is set to allow students to demonstrate achievement of a standard. This is more important than it may first seem: if the tasks are too easy, the top students cannot demonstrate their achievement; if they are too difficult, weaker students cannot show what they have learnt.

There is a real art to good assessment!

Assessment is incredibly powerful in indicating to students what we value, as Ramsden points out in his book Learning to Teach in Higher Education:

“From our students’ point of view, assessment always defines the actual curriculum ... Assessment sends messages about the standard and amount of work required and about which aspects of the syllabus are the most important.” (Ramsden, 2003, p. 182)

In this Module we will discuss each of the elements of the summative assessment cycle, including designing tasks, defining standards, implementing reliable marking, providing feedback and reflecting on learning (Figure 1).

Figure 1. Summative assessment cycle

Workload for you and the student

Three summative assessment tasks per semester are sufficient; more than that is too much work for you and the student. Carefully check the amount of time students need to complete each task, so that they are not overworked and to allow time for reflection. A stressful workload for students can increase their tendency to adopt surface, rather than deep, approaches to learning (Ramsden, 1984, 2003); and while it is no justification, it is also a factor contributing to plagiarism. It is important to ensure that assessment tasks from different units taken by the student are not all due at the same time! This should be done at a program level.

The timing of summative assessment is a careful balancing act: too much too early, and students may not have time to develop their understanding and skills (Sadler, 2009); too ‘end-loaded’ and students will not have time to apply the results of feedback in order to improve in subsequent tasks (Hounsell, 2007). There are no easy answers to this juggling act, but these are factors you should consider.

Tasks

Let's start with the design of assessment tasks. Knight (2006) described a number of conditions that summative assessment should satisfy, including:

  • being faithful to the curriculum (in a broader sense than ‘content’)
  • aligning to the idea that higher education is concerned with developing skills that we have been describing as relational.

The idea that assessment actually assesses what was intended is termed validity. The learning outcomes for a unit have already defined what it is we want students to be able to do, so we should design tasks that will let them show they have achieved these. (This is a key message of constructive alignment, see Module 8.)

One of the easiest ways to implement defined intentions in designing learning tasks is to refer to a taxonomy - a categorisation of types of learning - which may include a hierarchy. The best known is Bloom's taxonomy (Bloom et al., 1956), which was adapted for mathematics as the MATH taxonomy (Smith et al., 1996) and revised more generally by Anderson and Krathwohl (2001). These taxonomies, and the distinction between instrumental and relational understanding, were introduced in Module 2.

Figure 2. The MATH taxonomy (Smith et al., 1996)

The advantage of referring to a taxonomy is that it helps you think specifically about testing different levels of understanding of the same concept. It also helps to ensure that you ask higher level questions and use a range of question types.

A recent study (Bergqvist, 2007) examined the types of reasoning students taking introductory calculus courses were required to perform in examinations in Sweden. Tasks from 16 exams produced at four different Swedish universities were analysed and sorted into task classes. The study found that about 70% of the tasks were solvable by imitative reasoning (i.e. copying, pattern matching) and that 15 (of 16) of the examinations could be passed using only imitative reasoning. Bergqvist therefore proposes a reasoning taxonomy to help with writing examinations.
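To make the distinction concrete, here is an illustrative pair of tasks on the same area (our own sketch, not drawn from Bergqvist's data): the first can be answered by imitating a rehearsed procedure, while the second requires the student to construct and justify an example.

```latex
% Illustrative contrast between imitative and creative reasoning (sketch only).
\begin{enumerate}
  \item[(i)]  Differentiate $f(x) = x^{2}e^{x}$.
              % imitative: pattern-match the product rule
  \item[(ii)] Give an example of a function that is continuous at $x = 1$ but not
              differentiable there, and explain how you know.
              % creative: requires constructing and justifying an example
\end{enumerate}
```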

The point is that you want to create tasks that test the full range of quantitative reasoning and knowledge, so the use of a taxonomy can help you to design assessment tasks.

The design of tasks should also take into consideration the professional or general attributes described in the learning outcomes for your unit. For example, if you stated that students should be able to write well about mathematics, a well-designed multiple choice test would not validly assess this, even though it is possible to write objective items that require higher level understanding.
The following example shows questions that relate to different levels of understanding, incorporate different question styles, and call on general skills. You would not use all of them – the purpose of the example is to demonstrate the range of what is possible.

Table 1. Example of using a taxonomy to design tasks to test the concept 'continuity of a function'

Create: Write a one hour learning module for your fellow classmates so that they will understand the importance of the idea of continuity in mathematics. You can use any media you wish.

Evaluate: Look up continuity in two or three textbooks including the one set for your unit. Read the definitions and do the exercises.
  • Are the definitions in the textbooks written differently?
  • Which definition is better? Why?
  • Which set of exercises helped you understand continuity better? Give reasons for your answer.

Analyse: Why is the idea of continuity important in mathematics?
Apply / Understand:
  • Write down an example of a function that is continuous.
  • Write down an example of a function that is not continuous.
  • In about half a page, describe why differentiability implies continuity.

Remember: What is the definition of continuity for a function?
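For instance, acceptable answers to the tasks in Table 1 asking for examples of continuous and discontinuous functions might look like the following (an illustrative sketch only; many other answers would do):

```latex
% Illustrative answers for the example tasks in Table 1 (assumes the amsmath package).
\[
  f(x) = x^{2} \quad \text{is continuous at every } x \in \mathbb{R};
\]
\[
  g(x) = \begin{cases} 0, & x < 0,\\ 1, & x \ge 0, \end{cases}
  \qquad \text{is not continuous at } x = 0,
  \text{ since } \lim_{x \to 0^{-}} g(x) = 0 \ne 1 = g(0).
\]
```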

Task 10.1 Designing questions

Choose a question from a recent assignment, exam or other assessment you have written or marked. What concept is it testing? Now design a set of questions to test that concept at different levels, using either the MATH taxonomy, or modelled on Table 1.

As well as the design of tasks, or items within tasks, there are a number of methods that you could consider using for summative assessment. Houston (2001) describes some of these (such as projects, posters, journals, oral presentations), and has an excellent and extensive annotated bibliography. Wood (2007) describes assessment tasks that develop skills that will be needed in the transition to work (authentic assessment).

We will discuss two - team projects and examinations - before moving on to the other parts of the assessment cycle.

Team projects and group work

The move to incorporating more team and group work into university learning comes from several fronts. The most important are:

  • Employers - requiring teamwork skills in graduates.
  • Students - social and peer interactions produce deeper learning.
  • Lecturers - able to set more challenging tasks and can reduce marking load.

There are several tools to help with teamwork. The Self And Peer Assessment Resource Kit (SPARK), developed by Keith Willey and Anne Gardner at the University of Technology, Sydney, is a good tool for assessing group work (University of Technology, Sydney, n.d.). We have provided some other examples in the Further reading section.
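As a rough illustration of how self- and peer-ratings can be used, one common approach (a generic sketch only, not necessarily how SPARK computes its factors) is to scale the group mark for each member by a weighting derived from the peer ratings:

```latex
% Generic peer-moderation of a group mark (illustrative; not SPARK's algorithm).
% G      = mark awarded to the group's product
% r_i    = average rating member i received from teammates
% \bar{r} = mean of the r_i across the team
\[
  w_i = \frac{r_i}{\bar{r}}, \qquad M_i = \min\bigl(100,\; w_i \, G\bigr),
\]
% so a member rated above the team average receives a little more than the
% group mark, and a member rated below it receives a little less.
```

Whatever scheme you adopt, explain it to students before the project starts.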


Don't be afraid to incorporate group work into your classes and assessment. You will need to teach students a little about how to work in teams and groups, and it can be rewarding for both you and them. If you don't feel confident doing this yourself, you could ask a colleague with expertise in group work to introduce your class to working in teams.

Examinations

Examinations are a type of summative assessment task that is very common in mathematics. Well-designed examinations are a reliable, efficient and effective way to assess many learning outcomes. Remember, though, that they will not be able to test all of your learning outcomes - for example, verbal communication skills or some professional skills.

A good discussion about writing mathematics examinations is in Smith et al. (1996) and a comprehensive, general guide to writing exams has been developed by Macquarie University (Hoadley, 2008).

Examination hints

Caveat: before making changes to previous examinations, check with your head of department and with university policy. If you are making changes so that the examination is different from those in previous years, make sure you advise your students, and always provide them with a practice examination in the style of the final.

Timing

Examinations are generally held at the end of semester during the examination period. (If you want to hold a mid-semester exam, you may not get the same kind of institutional support for running it, which, if you have a large class, is a significant consideration.) Make sure that there is at least a week between the delivery of new material and the exam. One benefit of an examination is that students put the whole unit together and hopefully make links between ideas. (Sadler (2010) has coined the descriptor fidelity to describe assessments that allow students to display attainment over the whole subject.)

You (or better, a colleague) should be able to complete your examination in about one-third to one-half of the time that students would take to complete it, as sketched below. In practice, setting an exam that you think will take the students 2 hours and giving them 3 hours will make little difference to the overall results, and the students will be less rushed. It also caters for students who have learning and writing difficulties (more on that later). Don't feel you have to set long exams, or set questions on every topic in your unit, in order to measure student learning thoroughly. Short examinations can be reliable - you cannot ask everything. Just be sure to ask questions from different topics in subsequent years, and inform students that this is your practice, so that they do not limit their learning to the topics covered during the last year.
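A rough restatement of this timing rule of thumb (figures illustrative only):

```latex
% Exam timing rule of thumb (illustrative).
\[
  t_{\text{students}} \approx 2\text{--}3 \times t_{\text{setter}},
\]
% e.g. a paper you or a colleague can finish in about 45 minutes is a
% reasonable length for a 2-hour student sitting, with time to spare.
```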

Conditions

You have various options for the conditions of the examination:

  • Seen or unseen questions, or a mix of these.
  • Computer examinations - appropriate if a CAS or knowledge of other software is required.
  • Aids - what are you going to allow into the examination? Calculators, one page or sheet (double-sided) of notes, a formula sheet, open book? These do not necessarily make a difference to the result - an unprepared student will fail whatever you do - but they can relieve anxiety. We have had good success allowing one page of handwritten notes into the examination, as students then spend a lot of time summarising the unit in preparation for the exam. Note: it is our opinion that students should not be required to memorise formulae or basic definitions, so these should be made available in advance so that students have time to become familiar with them. This is authentic assessment, as you almost certainly look up things like this on a regular basis in your own practice as a mathematician.

Marking

We talk more about marking later; however, it is important to consider marking issues when you are writing the questions - especially for large classes.

Layout and signalling

The layout of an examination paper is important, as is the flow of the questions through the paper. Give signals to students to let them know the type and length of answers required: use phrases such as 'in about half a page', 'show all working' or 'use a diagram' to signal the length and form of an answer. You can also give signals through the number of marks allocated to each part. NSW HSC mathematics examination papers are very well written by top mathematicians and give you a good idea about layout and signalling (Board of Studies NSW, n.d.).
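For example, a question might be typeset along the following lines (a sketch only; the content, wording and mark values are illustrative, not a model question):

```latex
% Illustrative layout showing signalling of length, form and mark allocation.
\noindent\textbf{Question 3}
\begin{enumerate}
  \item[(a)] State the definition of continuity of a function $f$ at a point $x = a$.
             \hfill [2 marks]
  \item[(b)] Determine whether $f(x) = \frac{\sin x}{x}$ can be extended to a function
             that is continuous at $x = 0$. Show all working. \hfill [4 marks]
  \item[(c)] In about half a page, and using a diagram, explain the role of continuity
             in the statement of the Intermediate Value Theorem. \hfill [6 marks]
\end{enumerate}
```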

Question types

We can divide questions into three broad types: provided-response questions, such as multiple choice; questions that have a single answer; and open-ended questions. All question types can test learning objectives. Provided-response questions are harder to write and easier to mark. The Macquarie University guide to writing exams (Hoadley, 2008) has more on writing examination questions, and Smith et al. (1996) present an interesting array of question types.

Since you want to find out what students can do (as opposed to what they can’t do), consider breaking down some questions into shorter steps, as well as asking students questions which require them to carry out extended synthesis of ideas. The point here is that if students cannot tackle a complex question, you won’t find out any more about what they can do by asking them to do three more such questions. Recall the SOLO taxonomy (which describes the different levels of structure that student responses may take) from Module 2. Also, on a practical note, questions broken up into steps, particularly if a re-entry point is given, can reduce the number of consequential errors you have to consider.
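A stepped question with an explicit re-entry point might look like this (content illustrative only):

```latex
% Illustrative stepped question; part (b) gives a re-entry point.
\begin{enumerate}
  \item[(a)] Show that $e^{x} \ge 1 + x$ for all $x \ge 0$. \hfill [3 marks]
  \item[(b)] By integrating the inequality in part (a) from $0$ to $x$ (you may assume
             the result of (a) even if you did not prove it), deduce that
             $e^{x} \ge 1 + x + \frac{x^{2}}{2}$ for all $x \ge 0$. \hfill [3 marks]
\end{enumerate}
```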

Task 10.2 Variety in assessment

What methods of assessment and types of questions are you accustomed to using? Write a short paragraph describing one or more of them and post it to the discussion board. Also identify one or more that you would consider using, and any concerns or queries you have about them. Post this also, and be ready to comment on other people’s questions about a method you have tried or have read about.

After this “nuts and bolts” discussion of some of the kinds of tasks, we return to considering the stages in the assessment cycle.

Standards

Standards may be stated at your university in terms of a grading policy. These are general and will need to be interpreted for your degree program and unit. It is not sufficient to state that a mark of 85-100 will be awarded a High Distinction - you need to identify to yourself, your students and (perhaps) an external review panel what your standards are for each program, unit and task.

Assessing students against standards should be contrasted with two other practices. First, we are not comparing students to one another and ranking them, as you do when ranking for scholarships, for example – we are comparing each of them to the standard. Second, we are not pre-determining the proportion of students who will receive a passing grade or a High Distinction; that practice of ‘marking to the curve’ is called norm-referenced assessment.

Standards can cover general skills and professional skills, such as presenting written mathematics using Scientific Workplace (or LaTeX).

How do you communicate the standard required to students?

Sadler (1989) describes the challenge of standards communication:

“How to draw the concept of excellence out of the heads of teachers, give it some external formulation, and make it available to the learner, is a nontrivial problem.” (Sadler, 1989, p. 127).

He goes on to describe a number of strategies for communicating standards, which are included in this list (which also draws on other sources):

  1. Rubrics
    Assessment standards are often communicated by using a rubric, which is a table of descriptors of the requirements for different standards. These are given out to students before they complete the tasks. Writing a good rubric requires you to think about what you want students to achieve at the various levels. It also requires you to write tasks that will allow students to demonstrate the level required - that is, the questions need to test a variety of achievements. Rubrics are also useful as marking guides. Check out some examples of general rubrics; the general rubric for the knowledge domain is shown below in Table 2 (Graduate Skills, 2010).
  2. Sample answers
    One common and concrete way to communicate standards is to give students previous tasks with sample answers (exemplars); you can use answers from previous students rather than answers you have written yourself. Students are usually pleased for you to post their answers online - it is best to do this only with good answers - but you will need to obtain their permission first. There is some research showing that weaker students may never have seen an HD answer, and so may not be aware of the standard required, or realise that it is attainable by students in their cohort.
  3. Grade descriptors
    These are descriptions of what is expected at each grade level. Generally your university grade descriptors will need to be interpreted for mathematics. This can be done at a department level.
  4. Teaching resources
    You may need to develop resources to show (model) the standards required. For example you may need to teach students to use Scientific Workplace so that they can present their mathematics in a professional manner.

Standards of Achievement rubric

Here is a generic rubric which you can adapt for your particular task. As you become familiar with the rubric, it becomes easier to use. There is no need for students to reach HD level across the three domains to receive an HD overall - again this depends on your learning objectives. See Wood (2009) for examples of rubrics used in a mathematics subject.

Table 2. Standards of Achievement for knowledge domains (Graduate Skills, 2010)

The rubric covers three knowledge domains:
  • Conceptual: domain-specific and/or skills-specific conceptual knowledge - ‘knowing that’ (i.e. concepts, facts, propositions - surface to deep) (e.g. Glaser, 1989)
  • Procedural: domain-specific and/or skills-specific procedural knowledge - ‘knowing how’ (i.e. specific to strategic procedures) (e.g. Anderson, 1993)
  • Professional: professional knowledge - ‘knowing for’ (i.e. values, attitudes) related to practice (e.g. Perkins et al., 1993); includes graduate capabilities

Level 4 (HD)
  Conceptual: The concept is linked and integrated with other concepts, resulting in a new pattern of understanding. The depth and breadth of the concept is understood in such a way that the individual is inspired to re-organise other concepts, and motivated to make creative and innovative applications.
  Procedural: Demonstrates the capacity to create/develop new valid procedures. Rules are applied in novel ways, or new rules are derived from deep understanding.
  Professional: Demonstrates a strategic view to enable innovative outcomes in complex situations.

Level 3 (D)
  Conceptual: The understanding of the concept is broadened and appreciated from different angles, and this elaboration is reflected in the ability to consider the concept in other contexts and from different perspectives.
  Procedural: Demonstrates the ability to select appropriate procedures in a given context. Procedures no longer need to be given.
  Professional: Demonstrates the ability to adapt to new environments.

Level 2 (C)
  Conceptual: Some personal meaning has been extracted and the student's understanding reflects this internalised view. The concept has become a part of their knowledge. Nevertheless, the concept remains narrow and shallow and relatively disconnected from other concepts.
  Procedural: Demonstrates the ability to apply given rules and procedures in a variety of contexts and to novel problems.
  Professional: Can evaluate a professional situation and identify key issues.

Level 1 (P)
  Conceptual: Demonstrates the ability to describe and define the basic concepts relating to the skill, subject matter, and/or knowledge domain, but has not demonstrated an ability to elaborate or reflect on the meaning of the concept(s).
  Procedural: Demonstrates knowledge of the rules and can practice the rules of a given procedure and/or skill.
  Professional: Demonstrates a basic understanding of processes and functions but only a basic understanding of the significance of these in professional practice.

Level 0 (F)
  Conceptual: Demonstrates inability to describe and define the basic concepts relating to the skill, subject matter, and/or knowledge domain.
  Procedural: Demonstrates no knowledge of the rules and is not able to practice the rules of a given procedure and/or skill.
  Professional: Demonstrates no understanding of processes and functions or the significance of these in professional practice.

The types of knowledge represented in the rubric have been adapted from Billett (2009).

Task 10.3 Standards of achievement

  • List the ways that you currently tell students about the standards you require for different levels.
  • Think of two ways that you could do this better for your group of students. What about as a department?
  • Consider the rubrics available in the unit outline for the assessment tasks of this unit. Do you understand the difference in standards which are expected for each grade?

Marking

Marking in mathematics is generally considered objective when compared to, say, English, because of the more closed nature of the possible responses; hence it is sometimes believed to be more reliable. (Reliability refers to whether different markers would give the same result to the same piece of work, and/or whether similar performances get the same mark from a particular marker.) However, it is more subjective than you may think, particularly if you use types of assessment other than exams and problem-based assignments. There are several ways to make marking more reliable:

  • Design the questions well - so that you are testing the learning objectives, and so that the marking is efficient.
  • Design a marking guide to assist yourself and other markers with the consistent marking of tasks. Often a marking guide will be a rubric, which may be given to the students before they do the task so that it guides their responses.
  • Have the same marker mark the whole cohort for one group of questions or the whole assignment. This also makes it easier to pick up possible plagiarism across the cohort.
  • Run a preliminary meeting for markers, showing them how to interpret the marking scheme for real student answers. This is even more powerful if they actually all mark the same answer, and then you discuss it. This can be particularly effective for conveying to markers how you want open-ended questions to be marked for criteria like writing and reasoning (and not just ‘the answer’).
  • Double mark or sample mark.
  • As a marker, be prepared for unusual answers as these may be from the better students. Check with others if you are unsure about matching the answer to the marking guide.

One of the best ways to be explicit about your expectations is to have well-defined marking schemes, generally in the form of rubrics or annotated sample solutions.
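A marking scheme for a single 6-mark question might be annotated along these lines (the allocation is illustrative, not a prescription):

```latex
% Illustrative marking scheme rewarding reasoning and communication, not just the answer.
\begin{itemize}
  \item Appropriate method identified (e.g. correct definition or limit set up) \hfill 2 marks
  \item Working mathematically valid and clearly laid out                       \hfill 2 marks
  \item Correct conclusion, stated in a full sentence                           \hfill 1 mark
  \item Notation and reasoning communicated precisely                           \hfill 1 mark
\end{itemize}
% Consequential errors: award method and communication marks where an earlier
% slip is carried through correctly.
```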

Listen to this recording about moderation, voiced by an actor and based on an interview with an Australian academic (University of South Australia, 2011a). Moderation is a quality review and assurance process which supports examination setting and marking. It involves other academics and qualified staff confirming that the examination tasks and marking are valid and reliable. Essentially, it is a checking process. This interview is part of an ALTC project on the moderation of assessment (which will be mentioned again at the end of the module). The project also produced this useful resource on marking guides (University of South Australia, 2011b).

Task 10.4 Creating a rubric for marking and feedback

Devise a marking guide for a task you have set or are marking (or that you wrote in Task 10.1). Set it up as a rubric. You can use this rubric to give feedback to students on their performance.

Feedback

In the context of summative assessment, feedback still plays a role. Some authors dismiss the feedback here as being ‘feed-out’, or feedback-as-explanation-for-grade (see, for example, Knight (2002)). But if feedback on summative assessment items addresses the process of a student’s performance, not just the accuracy of the answers, it can give them information they can use in future tasks. And think how hard it is to change students’ impressions of what is required in later subjects – the ‘summative’ assessment of earlier years is still acting formatively on them. If students can do well on exams in early years using an instrumental understanding, this shapes their approach to later exams; their mark, though the narrowest sort of feedback, has given them a message about what is valued.

You should provide feedback for summative assessment, both in terms of what you were looking for, but also with future learning in mind. Various forms are possible (see Module 6).

In student surveys, such as the Course Experience Questionnaire (CEQ) and end-of-semester surveys, perceptions of feedback rate lower than other factors. The main factor that will improve the perceived quality of feedback is the response time: students quite reasonably want feedback quickly, before they have forgotten the task. This requires planning so that you:

  • Do not over-assess - assessment which is too long will be hard on the markers and take too long to return.
  • Plan the assessment and have the markers on hand to do it quickly.
  • Stick to deadlines. Post answers one or two days after the deadline so that students can self-assess.
  • Allow opportunities for peer- and self-assessment.

In a study of 900 students at Macquarie University (Rowe et al., 2008), all students said that feedback was important to them and that they took it seriously. When they were asked what they liked and disliked about the feedback they were currently receiving, some common themes emerged:

  • Students said that good feedback gave them a feeling of being taken seriously and respected as individuals.
  • Many expressed a strong dislike of feedback that took the form of simply giving the student a mark.
  • Students repeatedly said they wanted more feedback, though some also expressed sympathy with lecturers who were coping with large classes, and accepted that generous feedback might only be possible in more senior units.
  • Some students admitted to taking feedback less seriously when they had received a high mark.
  • A more prominent reason for not engaging with feedback was slow turnaround times. The most common suggestion students had for improving feedback was faster response times.
  • Interestingly, both domestic and international students expressed a liking for group feedback, where tutors or lecturers address a whole class or tutorial about general problems and difficulties arising in assignments and tests. They also liked it when tutors went through model answers to assignment questions.
  • In general they liked verbal feedback when it was generic and delivered to the class as a whole, because it allowed them to seek clarification; but preferred written comments when it came to their individual work. This suggests that an ideal arrangement would be to use both forms in combination.

Reflection

All of us need to reflect. As teachers, we reflect on how students have performed on each task, on how to better design our teaching, and on how our students react to the learning environment; we should also reflect on the tasks themselves, since often we are unhappy with the performance of some of our students on a particular assessment task.

How do we encourage students to reflect on their learning?

  1. Ask them. For many years John Shepherd at Macquarie University asked his students in final year actuarial studies to describe what they had learnt in their degree. This was a final year examination question. The students knew it would be in the examination so could prepare for it (Example 1 below). Peter Petocz in statistics at Macquarie University asks his students to report on their learning in their group project. This includes their learning about the group process as well as their learning of statistics (Example 2 below).
  2. Give them space. A really crowded curriculum and too many small tasks will have students scrambling to complete everything without time to put it all together properly.
  3. Do it yourself. Talk about how you reflect on mathematics and doing mathematics. You may also be required by your university to post a subject review that indicates your response to student feedback; take this seriously and show that your practice is reflective.

How do we reflect on our teaching and our students' learning?

We cover this in Module 7, but in terms of assessment, reflection completes the cycle: we reflect on how well our assessment tasks gave us information about our students' achievement.

  • Do you need to change the learning outcomes? (Note that you may not be able to do this if they are set at degree program level.)
  • Is there the right mix of tasks?
  • Were the assessment tasks well designed? Did you allow enough time for preparing them?
  • Was the workload for you and the students too high? Too low?
  • Did most students reach the minimum standard, and did some students shine?

It is tempting to respond to a problem in student performance by a sort of knee-jerk reaction that targets the teaching and the learning activities. Consider the assessment tasks as well. For this deliberation to be fruitful, it is important that you engage seriously with the variety of student responses to the questions you have written. You can learn to write better questions, and better marking guides, by reflecting on this experience.

Example 1. John Shepherd: reflection on your learning

John has received many teaching awards and is well known for excellent student outcomes and a student-centred approach to teaching. Here is what he has sent through for you to consider.

Context. Course unit: ACST201 - A second year mathematics of finance subject; core for applied finance and some business students (finance stream) and a popular elective with many accounting and some economics students. Enrolment now about 800.

Examination question. The last question (one of six) in the final examination; students should have had about half an hour to answer it:

You receive a letter from one of your close friends who is also a Macquarie University student. Your friend says in the letter:

"I'm trying to plan my study program for next year. I'm thinking of enrolling in ACST201 as an elective unit. I know you did this subject this year and it will help me to make up my mind if you can tell me what you learned from ACST201. But don't tell me what the university calendar or the unit outline or the teacher said you were supposed to learn - tell me what you learned in the subject. I don't want to know what someone else said you were supposed to learn, but what you believe you did actually learn."

In 250 to 300 words, write your letter in reply, explaining to your friend what you have learned from ACST201 this year.

Example 2. Peter Petocz: reflections on working in a group

Peter uses the following questions to get students to reflect on their group work experience:

  • How did your team usually work together? Please give some specific examples.
  • What was your particular job (or jobs) for the project?
  • What did you feel were the best aspects of your project?
  • What problems did you face, and how did you address those problems?
  • What would you do differently next time you carried out a project of this type?
  • In what ways did carrying out this project help you (or not) in your learning of applied statistics?
  • What advice would you give to students in the next STAT270 group?
  • What advice would you give the lecturer of the next STAT270 group?

Task 10.5 Reflection on the assessment cycle

Reflect on a recent task in terms of the assessment cycle and think about whether you can make improvements to it for any of the stages (task design, standards definition, marking implementation or feedback).

Consideration of some special situations

Students come to our programs with a variety of backgrounds and traits. Inclusive practice is the term given to taking these characteristics into consideration when designing teaching and assessment.

Transnational education and cultural issues

Transnational education (TNE) takes place when the learners are located in a country different from the one where the awarding institution is based. It raises many issues in terms of assessment and moderation of standards if you co-ordinate a unit in such a program. This moderation resource for TNE from UniSA (University of South Australia, 2011c), developed through the ALTC, contains useful insights into the assessment issues arising in TNE, covering moderation between campuses as well as between different markers at the same campus. Some of the considerations it addresses also apply to international or exchange students in your classes.

Task 10.6 Cultural considerations

Listen to this short audio clip on exams (University of South Australia, 2011d).
How well do you think you explain your assessment tasks to students who are accustomed to one particular style?

Catering for students with disabilities: alternative assessment

Some students will need alternative assessment, such as using a computer or presenting results orally instead of in writing. (Note: generally the student’s request will come via your university’s diversity centre or disability officer, so that students do not have to repeatedly explain and document confidential information to each lecturer, and also to maintain equity between students.)

When designing your class activities and assessments, consider ways that you can make the tasks accessible to a range of students. Here are some useful resources and strategies:


Review and conclusion

Assessment is critical to student learning. How and what we assess shows them what we value and how they should allocate their time. Summative assessment should be efficient and effective for both students and staff. We have shown how the use of a taxonomy will help you with designing tasks; how the use of a rubric will help you communicate the standards required; and how the same rubric can be used for efficient and effective marking and feedback.

The rest is up to your creativity – try and enjoy designing tasks and making them interesting and challenging for your students! In the next module, we explore different ways of creating mathematics learning communities.


Relation to Assessment Task 3

In Assessment Task 3, one option is to design an item of summative assessment and the associated marking guide and feedback; this module relates directly to that task.

For full details of the options in Assessment task 3, the submission date and the marking rubric, please consult the unit outline.



References

  • Anderson, L. W., & Krathwohl, D. R. (Eds.) (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom's Taxonomy of educational objectives: Complete edition. New York: Longman.
  • Bergqvist, E. (2007). Types of reasoning required in university exams in mathematics. The Journal of Mathematical Behavior, 26(4), pp. 348-370.
  • Billett, S. (2009, July). Workplace as a learning environment - Challenge for theory and methodology. Keynote presented at Researching Work and Learning 6 (RWL6), Roskilde University, Denmark, 1 July 2009.
  • Bloom, B. S. (Ed.), Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of Educational Objectives, Handbook 1: Cognitive Domain. New York: Longman.
  • Board of Studies NSW. (n.d.). HSC Exam Papers: Past High School Certificate examination papers and notes from the Marking Centre. Retrieved February 22, 2011, from http://www.boardofstudies.nsw.edu.au/hsc_exams/papers.html
  • Graduate Skills. (2010). Standards of Achievement. Retrieved February 22, 2011, from http://www.graduateskills.edu.au/wp-content/uploads/2010/08/GraduateSkills_Standards_Collated.pdf
  • Hoadley, S. (2008). How to create exams: Learning through assessment. Retrieved from http://www.mq.edu.au/ltc/pdfs/FBE_Exams.pdf
  • Hounsell, D. (2007). Towards more sustainable feedback to students. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education (pp. 101-103). Abingdon, Oxon: Routledge.
  • Houston, K. (2001). Assessing undergraduate mathematics students. In D. Holton (Ed.), The teaching and learning of mathematics at university level (pp. 407-422). New York: Kluwer Academic Publishers.
  • Knight, P. (2002) Summative assessment in higher education: practices in disarray. Studies in Higher Education, 27(3), pp. 275-286.
  • Mather, G., & Muchuatuta, M. (2011). How to teach with inclusive practice: learning through diversity. Retrieved from http://www.mq.edu.au/ltc/pdfs/LEAD_Inclusive_Practice.pdf
  • University of South Australia. (2011a). Moderation. (Audio file). Audio posted to: http://resource.unisa.edu.au/file.php/285/New_sound_file_2.mp3
  • University of South Australia. (2011b). Using marking schemes effectively. Retrieved from: http://resource.unisa.edu.au/file.php/285/MarkingSchemes_2010.pdf
  • University of South Australia. (2011c). Toolkit resources. Retrieved 22 August, 2011, from: http://resource.unisa.edu.au/course/view.php?id=285
  • University of South Australia. (2011d). Exams. (Audio file). Audio posted to: http://resource.unisa.edu.au/file.php/285/New_sound_file_5.mp3
  • Perkins, G., Beacham, N., & Croft, A. (2007). Computer aided assessment of mathematics for undergraduates with specific learning difficulties: Issues of inclusion in policy and practice. International Journal for Technology in Mathematics Education, 14(1), pp. 3-13.
  • Ramsden, P. (1984) The context of learning. In F. Marton, D. Hounsell and N. Entwhistle (Eds.), The Experience of Learning (pp. 144-164). Edinburgh: Scottish Academic Press.
  • Ramsden, P. (2003) Learning to Teach in Higher Education (2nd Ed.). London: RoutledgeFalmer.
  • Rowe, A.D., Wood, L.N. & Petocz, P. (2008). Engaging students: Student preferences for feedback. Higher Education Research and Development Society of Australia. Rotorua, NZ.
  • Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, pp. 119-144.
  • Sadler, D. R. (2010). Fidelity as a precondition for integrity in grading academic achievement. Assessment and Evaluation in Higher Education, 35(6), pp. 727-743.
  • Self And Peer Assessment Resource Kit. (n.d.). Introduction. Retrieved February 22, 2011, from http://spark.uts.edu.au/index.php
  • Smith, G., Wood, L., Coupland, M., Stephenson, B., Crawford, K., & Ball, G. (1996). Constructing mathematical examinations to assess a range of knowledge and skills. International Journal of Mathematical Education in Science and Technology, 27(1), pp. 65-77.
  • Wood, L. (2007). Classroom Notes: The transition to professional work, Australian Mathematical Society Gazette, 34(3), pp. 246-252.
  • Wood, L. N., & Smith, N. F. (2007). Graduate attributes: Teaching as learning. International Journal of Mathematical Education in Science and Technology, 38(6), pp. 715-727.

Further reading


Updated: 10 Apr 2013