
Adam Hain

CMU DET Program Cohort 4

Critical Review of Literature #1

Article Reviewed:

Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156–167.


1. Identify the clarity with which this article states a specific problem to be explored.

Erhel and Jamet (2013) do not state a research problem explicitly. However, the general research questions are woven throughout the article. For example, in the abstract, they state the purpose of their first experiment is to “identify the conditions under which DGBL is most effective” (Erhel & Jamet, 2013, p. 157). This gives the reader a general sense of the problem addressed. Later, in the concluding discussion, they state the purpose of the two experiments was to “assess the impact of type of instruction on cognitive information processing in DGBL” and to “answer the question ‘Is deep learning compatible with serious games?’” (pp. 164–165). While these examples are somewhat abstract, it would be inaccurate to claim the authors lack clarity as to the purpose of the study. The introduction to each experiment additionally provides specificity about what was examined. However, the research questions are not explicitly stated in a consistent way.

While the research problem is implicit in the text, it is more effective to specify it clearly, quickly providing the reader with the topic, the specific questions investigated, and the limits of the research conducted. This clarity lets the reader determine whether the questions investigated are relevant. This study provides all of that information but does not provide a specifically delineated section, forcing the reader to hunt for the research questions in various places.

2. Comment on the need for this study and its educational significance as it relates to this problem.

The authors assert that much of the research on Digital Game-Based Learning (DGBL) has compared gaming environments to traditional online multimedia modules. A key feature of this research is that it adopts a value-added approach, one that compares features of games to each other rather than comparing games to other multimedia. They argue the latter is an unfair comparison that presents numerous confounding factors. Hence, they opt for the value-added approach, which they consider more rigorous. Based on this approach, they test the effects of specific features of the learning environment. Instructions and feedback are the primary value-added features addressed and integrated into the environment created for this study.

Based on previous literature, the authors argue that DGBL has a positive effect on motivation and a mixed effect on overall learning. Overall, the researchers attempt to determine whether a specific combination of features retains the positive effects on motivation while minimizing the negative effects of incidental learning caused by game-oriented instructions. This research ultimately does provide clarity as to the conditions under which DGBL is most effective in terms of instructions and feedback, and is significant for that reason.

3. Comment on whether the problem is “researchable.” That is, can it be investigated through the collection and analysis of data?

According to the authors, stronger rigor is the strength of the value-added approach. Compared to a media-comparison approach, they argue this approach is less susceptible to confounding factors and is better investigated through the collection of relevant comparative data on the affordances of specific features.

I would agree that this approach is better able to define which features of DGBL are most effective, and I believe various DGBL features are best investigated using the value-added approach. However, comparing DGBL to conventional media is also still valid in my opinion, and the authors create a false dichotomy by presenting the two approaches in this way. Various game features should be tested to improve gaming environments. However, some gaming environments may still prove less effective when tested against traditional interactive media, providing no significant difference in learning outcomes. Since games are often expensive to produce, the media comparison approach can provide valuable information to institutions with limited resources. Research may show traditional interactive media to be sufficient for some content domains but show gaming superior in others. In this way, resources can be allocated appropriately, integrating gaming where it is most effective.

Theoretical Perspective and Literature Review

4. Critique the author’s conceptual framework.

To effectively critique the authors’ conceptual framework, criteria must first be established for examination. Antonenko (2015) offers a discussion of the value of conceptual frameworks in educational technology research, providing definitions from the literature and a critique of the varied terminology used to describe them. For example, the terms “conceptual framework” and “theoretical framework” are often used interchangeably. Antonenko argues this is inaccurate: theories are collections of related concepts, while conceptual frameworks are distinguished from them by including the assumptions, interests, and beliefs of the researcher, among other differences. Theories are frameworks containing concepts, whereas conceptual frameworks are built from a combination of theoretical frameworks, individual concepts, and observations “that are custom designed by researchers based on personal assumptions and epistemological beliefs, experiential knowledge, and existing (formal) theories for each individual study” (Antonenko, 2015, p. 57). While Erhel and Jamet (2013) do not define a conceptual framework explicitly, adopting this perspective allows us to examine the framework they present.

The article defines three threads in the literature review, upon which a conceptual framework is built: motivational theory (specifically, research showing DGBL increases motivation), the value-added approach to DGBL analysis, and the use of instructions to improve learning effectiveness in DGBL. Later, as part of the second experiment, a discussion of feedback is added to the framework, for a total of four primary strands. The incorporation of these constructs into a custom framework is appropriate to the questions addressed in the research, and all four strands are integrated into the research process and the discussion of results in a meaningful way: the research design is based on the value-added approach, measures of motivation are collected, instructions are an independent variable, and feedback is integrated into Experiment 2. By integrating these four strands into a conceptual framework, the authors provide context for the research.

The primary problems with the literature review and conceptual framework are the organization of information in Section 1.3 and an overall bias toward positive effects of DGBL found throughout the review. While researchers can be expected to incorporate assumptions and viewpoints into a conceptual framework, several instances in the article reflect a belief in the benefit of DGBL in spite of mixed evidence. The section titled “1.3 Benefits of digital learning games compared with conventional media (media comparison approach)” offers examples of both problems (Erhel & Jamet, 2013, p. 157). The term “benefits” is used in the title, but the content of the section shows the results are mixed. The sentence that follows provides another example of bias: “Many researchers agree that digital learning games have everything it takes to become an effective learning medium” (Erhel & Jamet, 2013, p. 157). Vague statements of this type are found throughout the section, suggesting advocacy rather than objectivity. Further, the first half of the section is confusingly organized, presenting evidence that appears contradictory and does not support the section’s title. While the prior research discussed is relevant, the organization is taxing on the reader.

5. How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?

The research cited pertains directly to the research questions. The article is well referenced and draws from numerous sources, and the cited articles build a case for the design choices made by the authors. The sections on motivation and instructions are well constructed. However, Section 1.3 could again be improved.

Offering more examples of the value-added approach from the literature might strengthen the argument for this approach’s superior rigor. It may also be possible to explain the choice of this approach without dismissing the media comparison approach. For example, Mayer (2015) suggests the value-added and media comparison approaches are two of three equally important strands of research in DGBL, the third being the cognitive consequences of existing games. The dichotomy presented by the authors may be unnecessary, and a more descriptive explanation would be equally effective.

6. Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?

It does not. Instead, the researchers provide a summary at the beginning of each experiment, providing context for that experiment from the literature review. These short introductions restate key points of the review before describing the questions addressed in each experiment. The review itself ends with the following passage:

Whichever interpretive framework we choose to apply, the message is that learners who do not actively invest in information processing are liable to engage in merely surface learning and achieve only modest learning performances. This is an all too likely scenario in an incidental learning context involving DGBL, where the instructions given to learners encourage them to play rather than to learn. (Erhel & Jamet, 2013, p. 158)

While this passage refers specifically to varied cognitive processing frameworks, it could easily be adapted to articulate the authors’ overall conceptual framework by adding the elements of motivation and feedback. This would improve the conclusion of the literature review by describing how the elements are connected.

7. Evaluate the clarity and appropriateness of the research questions or hypotheses.

Each experiment’s introduction discusses its respective question and hypothesis. The research question for Experiment 1 is present in that introductory section. Instead of “hypotheses,” the researchers use the term “assumptions.” The specific tests are difficult to tease out and would benefit from the authors simply stating the hypotheses to be tested in a bulleted list, especially since five measures of data are being collected.

Experiment 2 introduces feedback, which is essentially another strand of the conceptual framework and might be better placed in the literature review. Again, clarity would be increased by explicitly listing the research questions. However, in both cases, the specific questions and hypotheses are appropriate to the overall research question and adhere to the value-added approach; the primary problem is the lack of clarity with which they are stated.

Research Design and Analysis  

8. Critique the appropriateness and adequacy of the study’s design in relation to the research questions or hypotheses.

Several hypotheses are threaded into the introductions to each experiment, though they are described using various terms such as “ascertain,” “test,” and “predict.” This makes it difficult to link the hypotheses directly to the measures used in the study. In Experiment 1, the authors aim to determine whether the type of instruction in a DGBL environment affects the chosen data measures in terms of learning outcomes, goal formation, and motivation. In Experiment 2, they conducted essentially the same experiment with the addition of knowledge of correct response (KCR) feedback to the environment, using the same measures to examine effects on these domains.

Five measures were chosen to obtain data: a pre-test, recall quizzes, a knowledge questionnaire, and two motivation questionnaires assessing learning goals and intrinsic motivation. The measures are appropriate to provide data relevant to the research questions, assuming the instruments used were reliable; reliability details are not included. However, these measures do address the domains of knowledge and motivation.

In terms of the game environment tested in the design, it is unclear how it specifically qualifies as a game. The authors offer several definitions of DGBL in the literature review, first as “a competitive activity in which students are set educational goals intended to promote knowledge acquisition” (Erhel & Jamet, 2013, p. 156). The environment described does not feature competition or provide set goals. A second set of criteria, from Mayer & Johnson (2013), is more detailed: “(1) a set of rules and constraints, (2) a set of dynamic responses to the learners’ actions, (3) appropriate challenges enabling learners to experience a feeling of self-efficacy, and (4) gradual, learning outcome-oriented increases in difficulty” (p. 156). Experiment 2 offers dynamic responses, but the other features are not present. In fact, participants in the gaming instruction condition are invited to “play,” suggesting an absence of rules and constraints. So, by the authors’ own stated definitions, it is questionable whether a gaming environment was created. While the value-added approach was used effectively to examine the features and effects of instructions and feedback, it could be argued these elements were tested in a traditional interactive eLearning module, not a gaming environment.

9. Critique the adequacy of the study’s sampling methods (e.g., choice of participants) and their implications for generalizability.

The authors state that the participants in both cases “were recruited from a pool of students” (Erhel & Jamet, 2013, p. 159) but offer no detail on how they were recruited or what the characteristics of the pool were. A convenience sample consisting of volunteers (McMillan, 2012) may have been the method used. Health professions students were excluded, as their background might bias the sample. The two experiments used different sets of participants. The sample size seems acceptable, at approximately 45 participants per experiment. The specificity of the sample might limit generalizability, as the participants were primarily third-year college students in Europe.

10. Critique the adequacy of the study’s procedures and materials (e.g., interventions, interview protocols, data collection procedures).

There were three phases in each experiment. The first was a pre-test designed to screen for participants with high prior knowledge. In the second phase, participants interacted with the learning environment: they received the instruction type assigned to their experimental or control group, viewed the presentation material, and completed a short quiz after each section. In the third phase, participants completed a motivation questionnaire assessing goal orientation and intrinsic motivation, as well as a separate questionnaire assessing knowledge.

The procedures are well documented. The authors define the processes involved in conducting the experiments and describe how each of the measures is collected. However, more detail could be offered about the learning environment. As mentioned above, there is little to suggest that it is a gaming environment; the media segments are even termed “presentations.” More detail on the ASTRA environment might demonstrate the gaming features included.

11. Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.

Five data measures were collected: a pre-test, recall quizzes, a knowledge questionnaire, and two measures of motivation, one of learning goals and another of intrinsic motivation. All instruments were created by the researchers.

Validity is described by McMillan as “a judgment of the appropriateness of a measure for the specific inferences or decisions that result from the scores generated by the measure” (2012, p. 131). He argues that “it is the inference that is valid or invalid, not the measure” (2012, p. 131). Using the measures above, the authors argue that evidence of motivation and knowledge can be accurately inferred. This may be true; however, with little detail about the instruments used, and since they were not externally validated, this inference must be accepted cautiously. That said, the authors appear to exercise sound judgment and care in instrument construction, as evidenced by their example questions. Also, in the case of the pre-test, the questions were created by a medical expert.

Randolph (2007) defines threats to internal validity as “factors that can make it appear that the independent variable is causing a result when in fact it is a different variable or set of variables that is causing the result. In other words, threats to internal validity in experimental research are hidden causes” (p. 49). He defines seven types: history, attrition, regression to the mean, maturation, instrumentation, testing, and selection. None of these appear to be threats in this study.

Issues of measurement reliability and error are not discussed by the authors. Potential sources of measurement error are numerous and can result from the test environment or be internal to the subject (McMillan, 2012). However, there is no evidence any of these occurred in this study.

12. Question 12 not addressed.

Interpretation and Implications of Results

13. Critique the author’s discussion of the methodological and/or conceptual limitations of the results.

The authors list three limitations of the research. The first is the game environment; as discussed earlier, I agree with this limitation. The primary reason the game environment is listed as a limitation is its lack of interactivity, and creating a true game environment would indeed require more interactivity. However, interactivity alone would not produce a game environment, by the authors’ own cited definitions. It appears the authors have created a somewhat constructivist (freely explored, non-linear), traditional eLearning environment: a one-way push of information, created (as evidenced by the figures) in Articulate Storyline, an eLearning authoring platform typically used to create online presentation modules. The invitation to “play” in the learning environment does not make that environment a game, any more than an invitation to play with a math textbook would change the inherent nature of that textbook. However, the invitation to play does change the nature of the student/environment interaction, as evidenced by the results. I don’t believe this limitation invalidates the results; it simply confuses the reader, because the lack of gaming features is not addressed.

The next limitation listed is participants’ high scores on the quizzes. This seems a valid limitation as well. The problem might have been avoided by running a small pilot prior to the experiments to test the various knowledge collection instruments.

The third limitation listed was the use of purely offline data. Online data, such as log files, might have better informed the research, but it is likely the researchers simply did not have the resources to collect and analyze these data. With five measures of data already collected, log files may have proved impractical to include.

Another limitation, not mentioned in the article, concerns the characteristics and collection of the sample. The participants came from a very specific group, which may limit the generalizability of the results.

14. How consistent and comprehensive are the author’s conclusions with the reported results?

The authors’ primary conclusion is that while Experiment 1 showed learning-type instructions produce superior knowledge outcomes, Experiment 2 compensated for this in the gaming condition by adding feedback. In this way, the motivational advantages of the gaming environment were retained while the improved knowledge outcomes were kept. This reflects the authors’ implicit goal of showing that gaming environments improve learning. As discussed earlier, this goal is evident in several passages, and while it may be described as bias, it could also be argued that researchers should follow their interests.

In this case, the approach seems to have been validated. The researchers found a combination of features that successfully improved the outcomes of the environment created in the study. Again, I would caution against characterizing this as a gaming environment; I would argue the results apply more realistically to online, presentation-based eLearning environments.

15. How well did the author relate the results to the study’s theoretical base?

Several topics were presented in the authors’ conceptual framework, including motivation in DGBL, media comparison versus value-added approaches, instruction types, deep and shallow learning, cognitive load, and feedback. All are mentioned briefly in the discussion of results except the choice to use the value-added approach.

Primarily, the discussion connects previous literature to the results. I believe the depth of connection to the conceptual base is appropriate in this case. It is clear the authors are well informed as to the state of the current literature on these topics and where this research fits in that landscape.

16. In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?

The research shows interesting effects from manipulating instructions and feedback in eLearning environments. These effects are manifest in knowledge outcomes and in motivation. The results support previous literature, but the various measures are combined in a novel way that may prove useful to replicate in other environments, especially those with additional gaming features. For these reasons, I feel the research has value for practice in the field.

I have personally created several instruction integrations for use in eLearning environments. These were not tested formally, but I did collect informal video feedback from several users. I am interested in further research testing the effects of different instruction types on learning outcomes and self-efficacy in varied learning environments, so I found the research personally relevant.

While the results were interesting and presented valuable implications for the field, there are several ways the article could be improved, all of which might be described as improving clarity of presentation. This is not to say the various sections, in particular Section 1.3 of the literature review and the various hypotheses, did not make sense; they were simply difficult for the reader to tease out and derive meaning from. A clear list of research questions and hypotheses, linked to specific measures, would do much to quickly clarify the goals of the research. Likewise, Section 1.3 of the literature review could be improved by changing the heading (much of the section did not pertain to it), grouping the research into logical categories, and removing language that suggests a bias toward proving DGBL effective.

As research on DGBL continues, it is important for researchers to present studies that will be seen as rigorous and contribute to the field of DGBL (Mayer, 2015).  This research does illuminate some of the implications of instructions and feedback on learning outcomes and motivation.  By adopting the value-added approach, and a well-researched theoretical base, the authors have contributed to progress in the field.


References

Antonenko, P. D. (2015). The instrumental value of conceptual frameworks in educational technology research. Educational Technology Research and Development, 63(1), 53–71.

Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156–167.

Mayer, R. E. (2015). On the need for research evidence to guide the design of computer games for learning. Educational Psychologist, 50(4), 349–353.

McMillan, J. H. (2012). Educational Research: Fundamentals for the Consumer. Pearson.

Randolph, J. J. (2007). Multidisciplinary Methods in Educational Technology Research and Development. HAMK University of Applied Sciences, Digital Learning Lab.