This week I examined two very similar articles. Both studied trends in educational technology research methodology and subject matter, with different starting points but a common end date of 2014. I read the article from Scientometrics first, which answered the majority of my questions, then read the article from ETR&D to get a sense of a similar study from that journal.
Baydas, O., Kucuk, S., Yilmaz, R. M., Aydemir, M., & Goktas, Y. (2015). Educational technology research trends from 2002 to 2014. Scientometrics, 105(1), 709–725.
Reeves, T. C., & Oh, E. G. (2017). The goals and methods of educational technology research over a quarter century (1989–2014). Educational Technology Research and Development, 65(2), 325–339. https://doi.org/10.1007/s11423-016-9474-1
Both of the articles above describe the research methods and subject matter of studies undertaken by researchers in the field of educational technology. Both draw samples from Educational Technology Research and Development (ETR&D), but the Scientometrics article also samples from the British Journal of Educational Technology (BJET). As noted in the titles, both samples end at 2014 but have significantly different starting points: 1989 and 2002. The two studies employ different methodologies for conducting content analysis and also differ notably in the instruments used to classify the studies sampled; the discussion and review of the design of the various collection instruments was interesting in itself. In terms of research methodology, the studies adopted fairly similar frameworks for grouping study designs. Results in both studies reveal interesting trends, too numerous to discuss here, but some findings stand out. For example, Baydas et al. (2015) report that studies concerning learning theories, environments, and e-learning were given priority. Interestingly, Reeves and Oh (2017) report an overall drop in theory development goals in ETR&D. This suggests that the two studies might be examined more closely to find differences in focus, both longitudinally and between the two journals generally.
Both studies were well designed and offer clear results and meaningful analyses of trends, informed by the researchers' concerns for the future of the field as a whole. Several areas of the results were of interest, in terms of both research topics and methods. Quantitative methods remain in the majority, but numerous types of qualitative studies are also undertaken. It is interesting to see the dispersion of method types within these two categories, and the share of mixed methods studies. These results alone would have made the studies valuable, but the discussion in both adds context that is valuable to someone new to this field of inquiry. For example, Reeves and Oh discuss the fact that no critical or postmodern studies were published across the entire span studied.
These studies were an important starting point, a chance for me to get a grasp of the type of research currently being conducted in the field. The rationales for using specific methods with various subject matter are useful, as are the descriptions of the varied classifications of subject matter. Importantly for me, the work of categorization has already been undertaken, and I can benefit from the work of others in creating data collection instruments. The authors of both studies did just that, adopting categories based on the theoretical frameworks of others. Most vitally, the studies provide an opportunity to develop and digest ideas of my own and begin to imagine research designs.