Team Science Evaluations

Indicators for Measuring the Contributions of Individual Knowledge Brokers

Dr. Sabine Hoffmann, Eawag, Switzerland; Dr. Simon Maag, Parliamentary Services, Swiss Parliament; Dr. Robert Kase-Pasanen, University of Applied Sciences and Arts Northwestern Switzerland; Dr. Timothy J. Alexander, Eawag: Swiss Federal Institute of Aquatic Science and Technology
 

Environmental research often aims to achieve a broader impact on society and the environment. However, the impact of such research on policy and practice tends to fall short of expectations. This is partially due to a lack of productive exchange across the interface between research, policy and practice. Researchers are sometimes insufficiently informed about the concerns of decision makers and hence produce knowledge that has little relevance for them or is poorly timed. Decision makers, in turn, are not always sufficiently aware of available research knowledge or its implications for policy and practice.

Given these limitations, an increasing number of knowledge brokers work at the interface between research, policy and practice. Their specific function is to facilitate processes that foster productive exchange and mutual learning among research, policy and practice, with the ultimate goal of catalyzing positive change in society and the environment. However, empirical evidence on the effectiveness of the various processes facilitated by knowledge brokers remains incomplete. While frameworks exist for assessing research impact, few are available for evaluating the contributions of individual knowledge brokers at this interface.

In this presentation, we respond to this gap by presenting a set of indicators to measure the quantity and quality of the contributions of individual knowledge brokers to projects, programs, and platforms at the interface. We focus on indicators related to the processes facilitated by individual knowledge brokers (‘process indicators’) and indicators related to the process results on which knowledge brokers are likely to have a decisive influence (‘attributable results indicators’). For both types of indicators, we provide metrics for the quantity and quality of the contributions.
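To make the distinction concrete, the sketch below shows one hypothetical way such an indicator set could be recorded for self-assessment. The field names, example entries, and summary logic are our illustrative assumptions, not the indicator set presented in this talk.

    from dataclasses import dataclass

    # Hypothetical sketch: recording process and attributable-results
    # indicators, each with a quantity and a quality metric. All names and
    # values are assumptions for illustration only.

    @dataclass
    class Indicator:
        kind: str          # "process" or "attributable_result"
        description: str   # what the knowledge broker contributed
        quantity: int      # e.g., number of workshops facilitated
        quality: float     # e.g., mean participant rating on a 1-5 scale

    indicators = [
        Indicator("process", "stakeholder workshops facilitated", 4, 4.2),
        Indicator("attributable_result", "jointly framed research questions", 2, 4.8),
    ]

    # A self-assessment might summarize each indicator type separately.
    for kind in ("process", "attributable_result"):
        subset = [i for i in indicators if i.kind == kind]
        mean_quality = sum(i.quality for i in subset) / len(subset)
        print(kind, sum(i.quantity for i in subset), round(mean_quality, 1))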

The set of indicators is based on two sources: the existing literature and the practical experience of a group of knowledge brokers organized as a Community of Practice at the Swiss Federal Institute of Aquatic Science and Technology (Eawag), including the co-authors of this presentation. The insights gained from these two sources are integrated, synthesized and refined in an iterative process. The set of indicators is primarily intended to support knowledge brokers in self-assessing their contributions. It can help them to (a) identify ways to improve the effectiveness of their daily work, (b) demonstrate the benefits of their work, (c) reflect on processes of knowledge brokering and the desirable characteristics of process results, and (d) sharpen their professional profile and clarify their roles and responsibilities vis-à-vis their employers and other stakeholders. The set of indicators is flexible enough to be applied even where available resources are limited.


 

“Innovation Happens at the Intersections of Disciplines.” Transdisciplinary Research Outcomes Based on the Transdisciplinary Research on Energetics and Cancer (TREC) II Initiative Experience

Ms. Sarah D. Hohl, University of Washington, Fred Hutchinson Cancer Research Center; Dr. Sarah Knerr, University of Washington; Dr. Sarah Gehlert, University of South Carolina; Dr. Marian Neuhouser, Fred Hutchinson Cancer Research Center; Dr. Beti Thompson, Fred Hutchinson Cancer Research Center
 

Public health problems are influenced by multiple, interacting biologic, social, behavioral, and environmental factors, yet the dominant strategy for addressing these problems relies on monodisciplinary methods. Dynamic research approaches in which transdisciplinary teams of scientists collaborate beyond traditional disciplinary, institutional, and geographic boundaries have emerged as promising strategies for addressing pressing public health priorities. Transdisciplinary research is conceptualized as yielding unique outcomes given its novel and collaborative nature, in which research teams develop and use new methods outside their immediate areas of expertise. However, little prior work has attempted to identify and characterize the outcomes of transdisciplinary research undertaken to address societal issues.

We used a multistage mixed-methods framework to identify and explore outcomes of transdisciplinary research, using the Transdisciplinary Research on Energetics and Cancer (TREC) II initiative as a case example. A survey of TREC II investigators and trainees identified nine initial transdisciplinary outcomes that were further refined using interviews and focus groups. The final transdisciplinary research outcomes, whose relevance to addressing complex societal problems we describe using the TREC II experience, included: 1) new transdisciplinary team and consortia formation; 2) integrated theoretical framework development; 3) multi-level intervention model development and testing; 4) development and adaptation of relevant statistical models; 5) translation of findings across levels of influence; 6) public policy influence; 7) transdisciplinary manuscript publication; 8) transdisciplinary grant awards; and 9) training the next generation of transdisciplinary researchers.

Although the outcomes identified were similar to those expected from non-transdisciplinary approaches, they are distinguished by their involvement of team members representing diverse disciplines, their reliance on integrated theoretical frameworks, and their social-problem-oriented focus. These transdisciplinary outcomes could guide inquiry into the value added to research by using a transdisciplinary approach.

 

A New Methodology for Evaluating Research Integration

Ms. Bethany Laursen, Michigan State University; Dr. Nicole Motzer, National Socio-Environmental Synthesis Center (SESYNC)
 

A defining feature of interdisciplinary research (IDR) is that it integrates disparate disciplinary contributions into new IDR insights (National Research Council, 2005). As such, “integration is widely regarded as the primary methodology of interdisciplinarity” (Klein, 2012) and, by extension, of interdisciplinary team science. Yet, evaluating integrative team science processes and outcomes in transparent, comparable, and reproducible ways remains elusive. We therefore present a new methodology for evaluating the nature and extent of integration in research products such as peer-reviewed articles.

Gathering and evaluating evidence of integration is difficult. Our literature review revealed no fine-grained methodologies for evaluating integration in research products and found that related evaluations have relied primarily on expert judgment. However, expert judgment is (1) not transparent, and therefore does not increase understanding, (2) difficult to use for cross-case or cross-product comparisons, and (3) not reproducible, as it depends on the life experiences of the experts. Thus, while valuable, expert judgment is not sufficient for evaluating integration.

Through a collaboration between Michigan State University and the National Socio-Environmental Synthesis Center (SESYNC), catalyzed by MSU’s Engaged Philosophy Internship Program (itself a new form of team science), we developed and piloted an innovative methodology to fill this need. It is a mixed-methods evaluation that falls under the general umbrella of “discourse analysis.” The main methods deployed within this approach are argument analysis and integration analysis, both of which have quantitative and qualitative aspects. Together, these methods contribute to an overall “Synthesis Signature” for a given research product. The Synthesis Signature represents descriptively in what ways, and numerically to what extent, a given research product is integrative, interdisciplinary, or synthetic.

The keystone of our methodology is the IPO model of integration developed by O’Rourke and colleagues (2016) and demonstrated in team discourse at SciTS 2017 by Laursen and O’Rourke. The IPO model theorizes integration as a generic input-process-output (IPO) activity in which the number of outputs is fewer than the number of inputs; the process transforming inputs into outputs is known as an integrative relation. We have adopted this IPO model to guide integration evaluation. In doing so, we become the first scholars to operationalize this framework for evaluative purposes, or indeed for any purpose beyond theory.
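As a reading aid, the minimal sketch below expresses the IPO model’s core constraint in code: an integrative relation transforms a set of inputs into a strictly smaller set of outputs. The class name and the "reduction" measure are our illustrative assumptions, not the operationalization used in the Synthesis Signature methodology.

    from dataclasses import dataclass

    # Minimal sketch of the IPO (input-process-output) model of integration
    # (O'Rourke et al., 2016). The "reduction" measure below is a hypothetical
    # illustration, not the methodology's actual metric.

    @dataclass
    class IntegrativeRelation:
        inputs: list[str]    # disparate disciplinary contributions
        outputs: list[str]   # integrated insights produced from them

        def is_integrative(self) -> bool:
            # The IPO model requires fewer outputs than inputs.
            return len(self.outputs) < len(self.inputs)

        def reduction(self) -> float:
            # How strongly the relation condenses inputs into outputs
            # (0 = no reduction, values near 1 = strong condensation).
            return 1 - len(self.outputs) / len(self.inputs)

    # Example: three disciplinary contributions merged into one insight.
    relation = IntegrativeRelation(
        inputs=["hydrological model", "economic model", "land-use survey"],
        outputs=["coupled socio-environmental model"],
    )
    print(relation.is_integrative())        # True
    print(round(relation.reduction(), 2))   # 0.67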

We piloted the methodology on five research articles supported by SESYNC, whose primary goal is to address pressing socio-environmental challenges through interdisciplinary, team-based, socio-environmental (S-E) synthesis research. The pilot results help improve understanding and decision-making not only for SESYNC but also for other research centers and funders similarly engaged with and committed to S-E synthesis.

References:

Klein, J. T. (2012). Research Integration: A Comparative Knowledge Base. In A. F. Repko, W. Newell, & R. Szostak (Eds.), Case Studies in Interdisciplinary Research (pp. 283–298). Thousand Oaks, CA: SAGE. http://doi.org/10.4135/9781483349541

National Research Council. (2005). Facilitating Interdisciplinary Research. Washington, D.C.: National Academies Press.

O'Rourke, M., Crowley, S., & Gonnerman, C. (2016). On the nature of cross-disciplinary integration: A philosophical framework. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 56, 62–70. http://doi.org/10.1016/j.shpsc.2015.10.003

 

Evaluation of Centers and Institutes: Developing a Framework from Complex Systems and Team Science

Dr. Gwen C. Marchand, University of Nevada Las Vegas; Dr. Jonathan C. Hilpert, Georgia Southern University
 

There is a gap between the research infrastructure goals of large-scale centers and institutes and traditional evaluation efforts and reporting requirements. Although there has been a shift in funding from individuals to large-scale collaboratives, there has not been an equivalent shift in evaluation efforts focused on collaborative center/institute functioning. The science of team science (SciTS), and related complex systems approaches to research, can fill this gap by providing direction for evaluation methods and outcomes/indicators that address the evaluation needs of centers and institutes. The purpose of this presentation is to present our evaluation framework and illustrative examples of its use for the evaluation of centers and institutes.

Research collaboratives, such as those represented by funded centers and institutes, are complex systems (Borner et al., 2010). They are characterized by behavior that is (a) complex – macro-level system behavior that is not reducible to its parts (Gell-Mann & Lloyd, 1996; Mitchell, 2009); (b) dynamic – microprocesses among system components that change over time (Koopmans, 2015); and (c) emergent – microinteractions of system components that give rise to novel macrosystem behavior (Holland, 2006). Research teams range in complexity based on tasks, goals, size, proximity, and diversity (Fiore, 2015). Evaluation approaches must be multi-level and mixed-methods to adequately represent the ecology of team science (Fiore, 2015) and, specifically, relationships that are bound by context and discipline (Borner et al., 2010).

Evaluation of team science and collaborative knowledge production, underpinned by mixed-methods approaches to network analysis, has begun to emerge as a promising avenue for evaluating federally supported centers/institutes (Hilpert & Marchand, 2017; Luke et al., 2015; Marchand & Hilpert, 2018). SciTS approaches (e.g., Fiore, 2012) allow evaluators to produce evidence regarding the development of research infrastructure, collaborative scholarly productivity, and the development of a shared vision around a culture of research. Team science outcomes are often more meaningful in the context of a multidisciplinary center/institute than those produced by traditional evaluation approaches because they provide multiple forms of evidence for the cohesion of collaborative activity over the trajectory of a research collaborative (Luke et al., 2015).
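For illustration, the sketch below shows one quantitative strand such a network-based evaluation might include: building a co-authorship network from a center's publications and tracking simple cohesion metrics. The publication data and the choice of metrics are hypothetical assumptions, not the procedure of the studies cited above.

    import itertools
    import networkx as nx

    # Hypothetical sketch: a co-authorship network as evidence of collaborative
    # cohesion. The publication records below are fabricated for illustration.
    publications = [
        {"authors": ["A", "B", "C"], "year": 2016},
        {"authors": ["B", "D"], "year": 2017},
        {"authors": ["C", "D", "E"], "year": 2018},
    ]

    G = nx.Graph()
    for pub in publications:
        # Each pair of co-authors on a paper gets a collaboration tie.
        G.add_edges_from(itertools.combinations(pub["authors"], 2))

    # Two simple cohesion indicators: network density and the share of
    # members in the largest connected component.
    density = nx.density(G)
    largest = max(nx.connected_components(G), key=len)
    print(f"density={density:.2f}, largest component={len(largest)}/{G.number_of_nodes()}")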

Evaluation efforts focused on the development of team science outcomes provide the best evidence for productive ways forward for research collaboratives. We share an evaluation framework organized around a comprehensive set of team science outcome objectives that can be used to design evaluation data collection and analyses. The framework allows for the integration of outcome objectives and the production of knowledge around scientific advancement. The objectives are to provide evidence of the following outcomes:

  1. Collaborative scholarly productivity
  2. Planned and emergent research infrastructure
  3. Shared vision and culture of research
  4. Mentoring and advancement of scientists 
  5. Leveraging of resources to promote growth

Taken together, these outcomes can provide information that goes beyond traditional forms of evidence about the cohesion of a research collaborative. This evidence can be used to develop grant proposals, support annual reporting, and formulate resource requests.


 

Evaluating Transdisciplinary, Sustainability-Focused Higher Education Programs: Using Transdisciplinary Orientation as a Performance Measure in Three NSF-Funded Program Contexts

Dr. Shirley Vincent, Vincent Evaluation Consulting; Dr. Deana Pennington, University of Texas at El Paso; Dr. Robert Chen, University of Massachusetts at Boston; Dr. Alan Berkowitz, Cary Institute of Ecosystem Studies; Dr. Aude Lochet, Cary Institute of Ecosystem Studies
 

Evaluation of transdisciplinary, sustainability-focused higher education programs is a nascent and evolving field. The National Science Foundation (NSF) has funded interdisciplinary education programs for many years, but best practices for evaluating the performance of these programs have not been established. This presentation will focus on the use of the Transdisciplinary Orientation (TDO) Scale (Misra et al., 2015) as a performance measure in three NSF-funded program contexts. TDO is defined as the values, attitudes, beliefs, conceptual skills and knowledge, and behavioral characteristics important for effective collaboration in interdisciplinary teams. Scholars reporting higher levels of TDO have produced scientific outputs judged to be more transdisciplinary in nature and to have greater translational, policy, and practical relevance. Since experience in transdisciplinary research is positively and significantly correlated with higher levels of TDO, we predicted that training in convergence learning and transdisciplinary problem-solving would result in the development of higher TDO levels.

TDO has two dimensions: the Values, Attitudes, and Beliefs (VAB) dimension and the Conceptual Skills and Behaviors (CSB) dimension. The VAB dimension reflects an intellectual and personal orientation toward interdisciplinary research, including valuing collaboration and understanding the importance of including diverse disciplinary perspectives in solving complex problems. The CSB dimension reflects the conceptual skills and behaviors required for effective integration of multiple disciplinary perspectives and methods.

TDO scores are obtained using a validated 12-item scale. We documented TDO scores pre- and post-program in three program contexts: a 9-week summer undergraduate research experience delivered by the Urban Water Innovation Network: Transitioning Toward Sustainable Urban Water Systems program (NSF SRN); an intensive 2-week doctoral student training summer workshop offered through the Collaborative Research: Employing Model-Based Reasoning in Environmental Science program (NSF NRT-IGE); and a 2-year interdisciplinary graduate student training program provided by the Coasts and Communities: Natural and Human Systems in Urbanizing Environments program (NSF IGERT) at the University of Massachusetts at Boston. The first two programs recruited students from diverse higher education institutions.

Four cohorts across two of the three programs (30 undergraduate and 25 PhD students) showed consistent, statistically significant increases in TDO after participating in their education program (paired t-tests). Undergraduate gains tilted toward the VAB dimension, while PhD student gains tilted toward the CSB dimension. Results were mixed in one cohort of the 2-year graduate training program (7 graduate students), with some students reporting higher TDO and a few reporting lower levels. A second cohort has completed the scale pre-program and will complete it post-program in May. An additional NSF-funded graduate education program (NSF NRT) will also use the TDO scale as a performance measure. We will discuss differences in TDO scores between undergraduate and graduate students, as well as program features that may influence the development and student reporting of higher TDO levels. TDO is only one of an array of evaluation measures of program performance, but it shows promise as a means of evaluating programs designed to develop convergence learning and transdisciplinary research competencies.
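For readers unfamiliar with the analysis, a minimal sketch of the pre/post comparison follows: a paired t-test on per-student TDO totals. The scores are simulated for illustration only and are not the study's data.

    import numpy as np
    from scipy import stats

    # Hypothetical sketch of a pre/post paired t-test on summed TDO scores
    # (12-item scale). All numbers are simulated, not the study's data.
    rng = np.random.default_rng(0)
    n_students = 25
    pre = rng.normal(loc=45, scale=6, size=n_students)        # pre-program totals
    post = pre + rng.normal(loc=4, scale=3, size=n_students)  # post-program totals

    # Paired t-test: does the mean within-student change differ from zero?
    t_stat, p_value = stats.ttest_rel(post, pre)
    print(f"t({n_students - 1}) = {t_stat:.2f}, p = {p_value:.4f}")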