Education Journal
Volume 4, Issue 3, May 2015, Pages: 106-110

The Impact of Game Assessment on Enhancing Student’s Performance

Ahmad L. El Zein1, Lobna Bou Diab2, *

1Corporate Relations Office, Modern University for Business and Science, Beirut, Lebanon

2Faculty of Education, Modern University for Business and Science, Beirut, Lebanon


To cite this article:

Ahmad L. El Zein, Lobna Bou Diab. The Impact of Game Assessment on Enhancing Student’s Performance. International Journal of Economic Behavior and Organization. Vol. 4, No. 3, 2015, pp. 106-110. doi: 10.11648/

Abstract: There are various ways to assess students, but what differs is the validity of the assessment and its impact on students’ lifelong learning. This study discusses the way assessment was applied in the past (traditional assessment) and the negative effects it exerts on students. In addition, the study aims to demonstrate the worth of authentic assessment, especially game assessment, and its importance for students’ learning and performance, since it promotes the intellectual and cognitive development that 21st-century students need. Moreover, students achieve better results when playing a game, since it reduces anxiety and engages them in the learning process.

Keywords: Game Assessment, Student Performance, Education, Development

1. Introduction

The purpose of this research is to examine the importance of game assessment and its positive effect on the learning process and on students’ deep understanding. The reason for choosing this kind of assessment is to demonstrate its value and encourage its use in every school. The information presented in this article could be useful to anyone who is directly or indirectly involved in education.

"Assessment performances are day-to-day activities that can also be authentic and engaging demonstrations of students’ abilities to grapple with the central challenges of a discipline in real life contexts." (Kulieke et al., 1990, p.2).

Teachers, students, parents and administrators have diverse views on assessment strategies (Dietel, Herman, & Knuth, 1991). There are two ways to collect data about student learning: authentic assessment and traditional assessment. Students develop creativity, critical thinking and problem-solving abilities when teachers focus on their interests and needs through real-life activities and varied teaching strategies. There are various types of learners (visual, auditory, and kinesthetic) whose needs teachers should consider and cater to when planning and assessing. As a result, traditional assessment (generally called testing) should be transformed into authentic assessment for a better learning process (Dietel, Herman, & Knuth, 1991). Because of the negative influence of traditional assessment on curriculum and instruction, a number of researchers and scholars have studied its disadvantages and ways to overcome them using authentic assessment. One example of authentic assessment is game assessment, in which students play a game and have fun without even knowing that they are being evaluated. Game assessment has many advantages that facilitate grading and evaluating students’ performance for teachers, parents and students, in addition to some limitations. So, is the change worthwhile?

2. Literature Review

In traditional schools, student learning was examined only by testing (Dikli, 2003). Previously, teachers used strict, standardized tests to measure students’ strengths, areas for improvement, and how much they had learned (Edutopia Team, 2008; Baillie, n.d.). Communities then relied on the total number of points students earned on those tests to judge the worth of their academic achievements (Edutopia Team, 2008). According to Bailey (1998), traditional assessments are indirect and inauthentic, although they may be helpful or even essential where national standards apply. Bers and Mittler (1994) describe diverse styles of questions (multiple choice, fill-ins, matching, essays, sentence completions, etc.) that can be adopted and implemented quickly. Traditional assessment is the most common approach because it provides valuable information about students’ learning, but it is neither the only method nor the best technique for assessing students. Traditional assessment has several disadvantages and is poorly suited to the demands of the 21st century, which calls for critical thinkers, problem solvers, decision makers, and lifelong inquiry learners. The Literacy and Numeracy Secretariat (2010) holds that such assessment fails to capture students’ progress, which in turn limits their improvement (Franklin, 2002). In other words, tests limit what can be measured: teachers assess what students do, not what they can do (Bers & Mittler, 1994; Franklin, 2002). Law and Eckes (1995) indicated that most standardized tests evaluate only students’ lower-order thinking skills. Likewise, Smaldino et al. (2000) stated that traditional assessment focuses on students’ capacity for memorization and recall. Such tests cannot easily evaluate students’ critical thinking, problem-solving skills, and other capabilities (Franklin, 2002).
In multiple-choice questions specifically, there is a degree of guessing that naturally lowers the validity of testing (Bers & Mittler, 1994). Bailey (1998) also noted that in this kind of assessment no feedback is given to students. Furthermore, it is used only as a summative evaluation, not a formative one; schools may even teach just for the exam. Bers and Mittler (1994) and Law and Eckes (1995) emphasized the same problem, pointing out that traditional assessments are single-occasion tests: they assess what students can do at one specific time. What is more, tests are administered to large groups of students; they are not personalized and cannot be modified to meet the needs of every student (multiple intelligences, different learning styles, etc.) (Bers & Mittler, 1994).

Above all, we must not forget the test anxiety that the majority of students go through. Anxiety, in general, is something people encounter every day (Rachman, 2004). Birenbaum and Nasser (1994) stated that test anxiety affects people in every area of life whenever they are assessed, judged, or ranked. Test anxiety has become one of the most troublesome issues in schools where testing is carried out (Shaked, 1996). Asonibare and Olayonu (1997) and Okwilagwe (2001) reported that students perform more weakly in school nowadays than in the past, owing to the recent increase in high-stakes testing. This causes students to be stressed and anxious before any test (Putwain, 2008). Students now associate outcomes with the prospect of being tested, leading to performance stress and fear of failure (Black, 2005). Their mental condition and sense of emotional stability can be harmed by the stress of being tested. Test-anxious students may worry excessively about the consequences of failure (Spielberger & Vagg, 1995). After a test, some children may show behavioral troubles such as avoidance, crying, sickness, and outbursts of anger (Cheek et al., 2002). Paulman and Kennelly (1984) and Wittmaier (1972) argued that the low performance of test-anxious students stems from their poor knowledge of the material and their awareness that they are not ready for the test. Test anxiety reduces the performance of those who experience it (Sarason, 1980). Under time pressure, high-anxious children often work through tests too quickly, which leads to low outcomes in normal testing situations (Plass & Hill, 1986). Hancock (2001) noted that this problem of pre-test anxiety and low academic achievement is even greater for students with disabilities.
As a result, there is a high possibility that a large number of students will be left behind and not given the attention needed to attain their goals. Furthermore, test scores cannot describe a student’s progression, nor can they reveal what particular difficulties the student had during the test (Dikli, 2003).

Alternative assessments, such as authentic ones, aim to relate assessment to learners’ real-world experience. To be authentic, the task needs to be meaningful. Simonson et al. (2000) and Winking (1997) also point out that alternative assessments require higher-order thinking skills so that students can work out problems related to real life. In addition, the meaning of "knowing" has shifted from recalling and repeating information to finding it, evaluating it, and using it at the right time and in the right situations (Institute of Play, 2015). At the beginning of the 20th century, education emphasized the acquisition of basic skills and knowledge (reading, writing, calculation, etc.). Many experts think that success in the 21st century depends on education that fosters higher-order skills: the ability to think, solve complex problems, and interact critically through language and media (Institute of Play, 2015). Skills such as problem solving, communication, collaboration, and creativity, as well as personal attributes, are very significant and have been labeled "21st century skills" (Casner-Lotto & Barrington, 2006; Fadel, 2011). These skills are not considered in high-stakes tests; yet they are essential, since companies are searching for staff who have these skills and the ability to work in teams with coworkers (Edutopia Team, 2008). Thus, there is a need to reconsider the methods used to assess students’ performance, to ensure that they graduate with the qualifications to meet the demands of the 21st-century workplace. Alternative assessment methods suggest innovative ways of communicating what matters most about good education; they can encourage and inspire students to discover themselves and the world around them (Lombardi, 2008). Nowadays, there is rising interest in, and examination of, the use of games to evaluate 21st century skills (Shaffer et al., 2009; Shute, 2011).
Research suggests that assessment shapes students’ perceptions of learning in higher education (Ramsden, 1992) and that students have to understand assessment methods in order to be effective learners (Elwood & Klenowski, 2002). Students need methods that assist improvement and development, guided by unambiguous, realistic, and precise feedback, in addition to knowing the standards by which they will be judged. Without worthwhile feedback on past performance, there is no basis for correcting misconceptions or developing understanding (Lombardi, 2008). One goal of assessment is to find out whether educational programs are contributing to students’ intellectual development and interest in the subject matter (Palomba & Banta, 1999).

Piaget (1965) claimed that the way children play games could help them recognize the surroundings they live in and build their imagination of the world. Based on this point of view, many came to appreciate that people can achieve individual growth and successful learning through game playing. Smilansky (1968) and Kafai (1996) also argued that games can help students build their own thoughts and knowledge while achieving objectives. Games are designed to pose complex problems that players come to understand through independent discovery. They are intended to deliver just-in-time learning and to use information to help players understand how they are performing, what they need to work on, and where to go next (Institute of Play, 2015). Goddard et al. (2001) argued that games provide complex settings in which content, skills and attitudes all play a vital role during play. Games provide ongoing practice through which students may achieve better accuracy and better recall (Driskell et al., 1992; Brophy & Good, 1986). Squire (2002) declared that games have educational potential from both cognitive and social standpoints. Games sustain, emphasize and speed up the learning process, and support higher-order cognitive growth (Green & Bavelier, 2003; Klabbers, 2003). Coyne (2003) claimed that students collect the data they need, apply it, and engage in the learning process. Walliser (1998) found that games stimulate critical thinking, information gathering and sharing, and collective problem solving. Mutual trust and communication skills affect these interactions (Stanulis & Russell, 2000). When a student (or an adult, for that matter) plays a game, he or she exercises the mind by entering a simulation of real-life situations. When a game is played, real-life-like decisions are made, solutions are analyzed and problems are solved (Denis & Jouvelot, 2005). Students can experience learning by doing.
But the main reasons for this growing interest are the overall success of games, the motivation of players, and their deep engagement while playing (Hlodan, 2008). Games have the power to engage people in fun ways, providing interaction, opportunities for problem solving, and other essentials that give users structure and motivation while promoting involvement and creativity (Journal of Educational Technology, 2007).

Many studies focused on presenting the advantages of using or better adopting game-based learning for supporting motivation in learning and for improving skills and competences (Dondi & Moretti, 2007).

Games are valuable as an assessment tool for various reasons. First, they allow students to experience real-life situations and assess the application of knowledge and skills. Second, games are engaging and motivating, which makes them more valid (Schmit & Ryan, 1992; Sundre & Wise, 2003). This type of assessment sheds light on the growth and performance of the student. That is, if a learner was unsuccessful in performing a given task at a particular time, he or she still has the chance to demonstrate that ability at a different time and in a different situation (Dikli, 2003). Since alternative assessment develops across situations and over time, the teacher has the opportunity to measure students’ strengths and weaknesses in a variety of areas and circumstances (Law & Eckes, 1995). Looking at students’ work rather than their grades allows instructors to gain further insights into students’ knowledge and skills (Niguidula, 1993). Authentic assessments let learners communicate their knowledge of the material in their own way, drawing on multiple intelligences (Brualdi, 1996). Reeves (2000) held that the importance of performance assessment lies in students’ ability to relate their knowledge and skills to real-life simulations. Interested students work faster and learn more in educational settings. Additionally, past studies have stated that playing games improves intellectual and cognitive development, unlike traditional settings (Pange, 2003; Perry & Ballou, 1997).

From the literature review, we derive two hypotheses:

H1: Using game-assessment will decrease anxiety.

H2: Decreased anxiety will increase students’ performance.

3. Methodology

To test the hypotheses, a questionnaire was administered to 121 students in schools in Aley. Owing to time and financial constraints, the scale used was one previously used by other researchers. The results were as follows:

Table 1. The results of H1.

β Std. Error t p
0.9509 0.3408 2.5662 0.0054

H1 measures the effect of game assessment on anxiety. It is formulated as "Using game-assessment will decrease anxiety". The effect is significant (p < 0.05); therefore, game assessment affects anxiety.

Consequently H1 is validated.

Table 2. The results of H2.

β Std. Error t p
0.9809 0.3508 2.7962 0.0061

H2 measures the effect of anxiety on student performance. It is formulated as "Decreased anxiety will increase students’ performance". The effect is significant (p < 0.05); therefore, anxiety affects student performance.

Consequently H2 is validated.
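The reported statistics can be sanity-checked by hand: in a simple regression, the t statistic is the coefficient divided by its standard error. A minimal sketch using the Table 2 values, assuming t = β/SE and a normal approximation for the two-tailed tail probability (the paper does not report degrees of freedom; the exact p in the table presumably comes from the t distribution, which gives a slightly larger value):

```python
import math

# Values reported in Table 2 (H2): coefficient and its standard error.
beta = 0.9809
se = 0.3508

# t statistic = coefficient / standard error.
t = beta / se
print(round(t, 4))  # close to the reported t of 2.7962

# Two-tailed p-value under a normal approximation of the t distribution.
p = math.erfc(t / math.sqrt(2))
print(round(p, 4))  # roughly the same order as the reported p of 0.0061
```

The same arithmetic applied to Table 1 is a quick way for a reader to check the internal consistency of reported regression output.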

4. Conclusion

To sustain students’ motivation during assessment activities, cooperative group testing and game approaches may be used (Russo & Warren, 1999). In a cooperative framework, all individuals are motivated to play an active part in solving a problem (Damon & Phelps, 1989; Tudge & Rogoff, 1989). Solutions to problems, and learning itself, may derive from peer interactions that might not occur in individual testing situations (Vygotsky, 1978). Advantages of group testing include satisfaction with, and preference for, that assessment approach (Stasson et al., 1991). In this way, anxiety levels are reduced and cooperative skills are developed (Zimbardo et al., 2003; Russo & Warren, 1999). Cortright et al. (2003) found that cooperative testing, compared with individual testing, improves students’ retention of learned material, which aligns with our study. Through group interaction, ideas are combined, errors are corrected, good points are positively reinforced, and higher-order thinking is stimulated (Tjosvold & Field, 1983), which supports the results we reached. Gigone and Hastie (1993) showed that information shared within a three-member group is weighted more heavily than information held by a single individual.

References
  1. Asonibare, J. B., & Olayonu, E. O. (1997). Locus of control, personality type and academic achievement of secondary school students in Offa and Oyun local governments. Nigerian Journal of Clinical and Counselling Psychology, 3(1), 14-23.
  2. Bailey, K. M. (1998). Learning about language assessment: dilemmas, decisions, and directions. Heinle & Heinle: US.
  3. Baillie, M. (ND). Traditional Assessments.
  4. Bers, T. H. & Mittler, M. L. (1994). Pros and cons of tools for doing assessment.
  5. Birenbaum, M., & Nasser, F. (1994). On the relationship between test anxiety and test performance. Measurement & Evaluation in Counseling & Development, 27, 293-302.
  6. Black, S. (2005). Test anxiety. American School Board Journal, 192(6), 42-44.
  7. Brophy, J. E. & Good, T. L. (1986). Teacher behavior and student achievement. In Handbook of Research on Teaching (ed. M. C. Wittrock), pp. 328–375. Macmillan, New York.
  8. Brualdi, A. (1998). Implementing performance assessment in the classroom. Practical Assessment, Research & Evaluation, 6(2). Available online:
  9. Casner-Lotto, J., & Barrington, L. (2006). Are they really ready to work? Employers’ perspectives on the basic knowledge and applied skills of new entrants to the 21st century US workforce. New York, NY: The Conference Board.
  10. Cheek, J. R., Bradley, L. J., Reynolds, J., & Coy, D. (2002). An intervention for helping elementary students reduce test anxiety. Professional School Counseling, 6(2), 162-165.
  11. Cortright, R. N., Collins, H. L., Rodenbaugh, D. W. & DiCarlo, S. E. (2003) Student retention of course content is improved by collaborative group testing, Advances in Physiology Education, 27, 102–108.
  12. Coyne, R. (2003). Mindless repetition: learning from computer games. Design Studies 24, 199–212.
  13. Damon, W. & Phelps, E. (1989) Critical distinctions among three methods of peer education, International Journal of Educational Research, 13, 9–19.
  14. Dietel, R. J., Herman, J. L., & Knuth, R. A. (1991). What does research say about assessment? NCREL, Oak Brook.
  15. Dikli, S. (2003). Assessment at a distance: Traditional vs. Alternative Assessments.
  16. Driskell, J.E., Willis, R.P. & Cooper, C. (1992). Effect of overlearning on retention. Journal of Applied Psychology 77, 615–622.
  17. Edutopia Team. (2008). How Should We Measure Student Learning? The Many Forms of Assessment: There is more than one way to measure a student's abilities.
  18. Elwood, J. & Klenowski, V. (2002) Creating communities of shared practice: the challenges of assessment use in learning and teaching, Assessment and Evaluation in Higher Education, 27(3), 243–256.
  19. Fadel, C. (2011). Redesigning the curriculum. Center for curriculum redesign. Retrieved from:
  20. Franklin, J. (2002). Assessing assessment: Are alternative methods making the grade?.
  21. Gigone, D. & Hastie, R. (1993) The common knowledge effect: information sharing and group judgment, Journal of Personality and Social Psychology, 65, 959–974.
  22. Goddard, L., Dritschel, B. & Burton, A. (2001). The effects of specific retrieval instruction on social problem-solving in depression. British Journal of Clinical Psychology 40, 297–308.
  23. Green, S.C. & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature 423, 534–537.
  24. Gros, B. (2007). Digital games in education: the design of games based learning environments. Journal of Research on Technology in Education 40, 23–39.
  25. Hancock, D. R. (2001). Effects of test anxiety and evaluative threat on students' achievement and motivation. The Journal of Educational Research, 94(5), 284-290. doi: 10.1080/00220670109598764
  26. Institute of Play. (2015).
  27. Kafai, Y.B. (1996). Gender difference in children’s constructions of video games. In Interacting with Video (eds P.M. Greenfield & R.R. Cocking), pp. 39–66. Ablex, Norwood, NJ.
  28. Klabbers, J. (2003). The gaming landscape: a taxonomy for classifying games and simulations. In Level Up Digital Games Research Conference (eds M. Copier & J. Raessens), pp. 54–67. University of Utrecht, Utrecht, the Netherlands.
  29. Kulieke, M., Bakker, J., Collins, C., Fennimore, T., Fine, C., Herman, J., Jones, B.F., Raack, L., & Tinzmann, M.B. (1990). Why should assessment be based on a vision of learning? [online document] NCREL, Oak Brook: IL.
  30. Law, B. & Eckes, M. (1995). Assessment and ESL. Peguis publishers: Manitoba, Canada.
  31. Literacy and Numeracy Secretariat. (2010). Literacy.
  Niguidula, D. (1993). The digital portfolio: a richer picture of student performance [online document]. CES National. Available online:
  32. Lombardi, M. M. (2008). Making the Grade: The Role of Assessment in Authentic Learning.
  Okwilagwe, E. (2001). A causal model of undergraduate students’ academic achievement. Journal of ICEE and NAPE, 1(1), 1-13.
  33. Palomba, C. A. & Banta, T. W. (1999) Assessment essentials: planning, implementing, and improving assessment in higher education (San Francisco, CA, Jossey-Bass).
  34. Pange, J. (2003). Teaching probabilities and statistics to preschool children. Information technology in childhood education annual, 1, 163-173.
  35. Paulman, R. G., & Kennelly, K. J. (1984). Test anxiety and ineffective test taking: Different names, same construct. Journal of Educational Psychology, 76, 279-288.
  36. Perry, E. L., & Ballou, D. J. (1997). The role of work, play, and fun in microcomputer software training. ACM SIGMIS Database, 28(2), 93-112.
  37. Piaget, J. (1962). Play, Dreams and Imitation in Childhood. W. W. Norton, New York.
  38. Piaget, J. (1965). The Moral Judgment of the Child. Free Press, New York.
  39. Plass, J., & Hill, K. T. (1986). Children’s achievement strategies and test performance: The role of time pressure, evaluation anxiety, and sex. Developmental Psychology, 22, 31-36.
  40. Putwain, D. W. (2008). Test anxiety and GCSE performance: The effect of gender and socio-economic background. Educational Psychology in Practice, 24(4), 319-334. doi: 10.1080/02667360802488765
  41. Rachman, S. (2004). Anxiety (2nd ed). Routledge: New York, NY.
  42. Rahmat, R.B. & Sulaiman, N. (ND).21st Century Social Studies Assessment: Disadvantages of Alternative Assessment.
  43. Ramsden, P. (1992) Learning to teach in higher education (London, Routledge).
  44. Reeves, T. C. (2000). Alternative assessment approaches for online learning environments in higher education. Educational Computing Research, 3(1) pp. 101-111.
  45. Russo, A. & Warren, S. H. (1999) Collaborative test taking, College Teaching, 47, 18–20.
  46. Sarason, I. G. (1980). Test anxiety: Theory, research and applications. Hillsdale, NJ: Erlbaum.
  47. Schmit, M. J., & Ryan, A. M. (1992). Test-taking dispositions: A missing link? Journal of Applied Psychology, 77(5), 629.
  48. Shaffer, D. W., Hatfield, D., Svarovsky, G. N., Nash, P., Nulty, A., Bagley, E., … Mislevy, R. (2009). Epistemic network analysis: A prototype for 21st-century assessment of learning. International Journal of Learning and Media, 1(2), 33-53. Retrieved from:
  49. Shaked, Y. (1996). During the test I am in a shock. Maariv (Israeli daily newspaper), A1, pp. 6-7.
  50. Shute, V. J. (2011). Stealth assessment in computer-based games to support learning. Computer games and instruction. Charlotte, NC: Information Age Publishers. Retrieved from
  51. Simonson, M., Smaldino, S., Albright, M. and Zvacek, S. (2000). Assessment for distance education (ch 11). Teaching and Learning at a Distance: Foundations of Distance Education. Upper Saddle River, NJ: Prentice-Hall.
  52. Smilansky, S. (1968). The Effects of Sociodramatic Play on Disadvantaged Preschool Children. John Wiley, New York.
  53. Spielberger, C. D. & Vagg, P. R. (1995). Test anxiety: Theory, assessment, and treatment. Philadelphia, PA: Taylor & Francis.
  54. Squire, K. (2002). Cultural framing of computer/video games. Game Studies 2, Available at:
  55. Sundre, D. L., & Wise, S. L. (2003). ‘Motivation filtering’: An exploration of the impact of low examinee motivation on the psychometric quality of tests. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL. Retrieved from:
  57. Tjosvold, D. & Field, R. H. G. (1983) Effects of social context on consensus and majority vote decisions making, Academy of Management Journal, 26, 500–506.
  58. Tudge, J. & Rogoff, B. (1989) Peer influences on cognitive development: Piagetian and Vygotskian perspectives, in: M. H. Bornstein & O. S. Bruner (Eds) Interaction in human development (Hillside, NJ, Lawrence Erbaum), 17–40.
  59. Winking, D. (1997). Critical issue: Ensuring equity with alternative assessments [online document]. NCREL (North Central Regional Educational Laboratory), Oak Brook: IL. Available online:
  60. Wittmaier, B. (1972). Test anxiety and study habits. Journal of Educational Research, 65, 852-854.
  61. Vygotsky, L. S. (1978) Mind in society: the development of higher psychological processes (M. Cole, V. John-Steiner, S. Scribner, & E. Souberman, Eds. & Trans.) (Cambridge, MA: Harvard University Press).
  62. Zimbardo, P. G., Butler, L. D. & Wolfe, V. A. (2003) Cooperative college examinations: more gain, less pain when students share information and grades, Journal of Experimental Education, 71, 101–125.
