Year: 2023 | Volume: 21 | Issue: 1 | Page: 47-57
Competency Assessment of Non-Specialists Delivering a Psychological Intervention in Lebanon: A Process Evaluation
Rozane El Masri1, Frederik Steen2, April R Coetzee3, May Aoun4, Brandon A Kohrt5, Alison Schafer6, Gloria A Pedersen7, Rabih El Chammay8, Mark J.D Jordans9, Gabriela V Koppenol-Gonzalez10
1 Research Coordinator, War Child Lebanon, Beirut, Lebanon
2 Researcher, Research and Development Department, War Child, Amsterdam, The Netherlands
3 Researcher, Research and Development Department, War Child, Amsterdam, Amsterdam Institute of Social Science Research, University of Amsterdam, Amsterdam, The Netherlands
4 MHPSS Regional Advisor, War Child Lebanon, Beirut, Lebanon
5 Professor, Department of Psychiatry and Behavioral Sciences, Director, Center for Global Mental Health Equity George Washington University, Washington, USA
6 Technical Officer, World Health Organization, Geneva, Switzerland
7 Senior Research Associate, Department of Psychiatry and Behavioral Sciences, George Washington University, Washington, USA
8 Head of the National Mental Health Programme, Ministry of Public Health, Beirut, Lebanon
9 Professor, Amsterdam Institute of Social Science Research, University of Amsterdam, Amsterdam, Director of Research and Development Department, War Child, Amsterdam, The Netherlands
10 Senior Researcher, Research and Development Department, War Child, Amsterdam, The Netherlands
Date of Submission: 08-Sep-2022
Date of Decision: 04-Mar-2023
Date of Acceptance: 09-Mar-2023
Date of Web Publication: 27-Apr-2023
Correspondence Address: Rozane El Masri, MPH, Research and Development Department, War Child Holland, Verdun, Hussein Oweini Street, Beirut 1103
Source of Support: None, Conflict of Interest: None
There is an increasing need to improve the competency and quality of non-specialists delivering psychological interventions. As part of the Ensuring Quality in Psychological Support (EQUIP) initiative, this study evaluates the process of roleplay-based competency assessment using three tools to assess the competencies of facilitators delivering a psychological intervention for children in Lebanon. With a group of five competency raters, five facilitators and four actors, this study uses a mixed methods approach comprising competency assessment data, qualitative interviews and focus group discussions. Data were collected during a two-phase process. Inter-rater agreement was generally acceptable after additional training of the raters, indicating that it is feasible to prepare actors, facilitators and raters for roleplay-based implementation of competency-driven training. The non-specialists found that, overall, taking part in competency assessments helped them understand where they could improve. Pre- to post-training improvements in competencies showed that, despite reported feelings of anxiety, the facilitators benefited from the feedback given on their competencies. We conclude that using roleplay-based competency assessments and preparing for competency-based training is feasible and useful for ensuring quality control in mental health and psychosocial support (MHPSS) service provision.
Keywords: actors, competency assessments, competency raters, competency tools, MHPSS training, non-specialists, roleplays, supervision, training
|How to cite this article:|
El Masri R, Steen F, Coetzee AR, Aoun M, Kohrt BA, Schafer A, Pedersen GA, El Chammay R, Jordans MJ, Koppenol-Gonzalez GV. Competency Assessment of Non-Specialists Delivering a Psychological Intervention in Lebanon: A Process Evaluation. Intervention 2023;21:47-57
Key implications for practice
- Competency-driven training can be feasibly implemented in humanitarian settings to improve the quality of training and skills achieved by non-specialists to deliver psychological interventions for children and adolescents.
- Skills needed for competency-driven approaches include training in conducting standardised roleplays and using structured observational assessment tools.
- Sufficient explanation of the purpose of competency-driven training should be provided to trainees to reduce their anxiety about being assessed and optimise the benefits of this approach.
Introduction
The delivery of psychological interventions by non-specialists (i.e. facilitators without formal mental health training) through task-sharing is increasingly occurring in low- and middle-income countries (LMIC) to address mental health and psychosocial support (MHPSS) needs in areas that lack specialists (Hoeft et al., 2018; Patel et al., 2012). However, there remains a need to ensure quality control in care provision, including competency assessment of non-specialists (Jordans & Kohrt, 2020; Singla et al., 2017). Whereas professional associations assess the competencies of MHPSS specialists, there is a lack of standardised systems and mechanisms for checking whether non-specialists achieve a minimum level of competency to support effective and safe delivery of care (Kohrt et al., 2020).
Recently, the World Health Organization (WHO) and UNICEF launched the Ensuring Quality in Psychological Support (EQUIP) initiative (https://equipcompetency.org). EQUIP is an online, open-access platform that supports competency-driven approaches with freely accessible competency assessment tools and related guidance on the training and supervision of non-specialist facilitators of psychological interventions (Kohrt et al., 2020). The development of the EQUIP platform involved research and pilot testing on the feasibility, acceptability and usefulness of the competency assessment tools, trainings and the digital platform. An EQUIP research consortium supported this work with studies conducted in different countries, including Kenya, Ethiopia, Lebanon, Nepal, Peru, Uganda and Zambia. This study focuses on a process evaluation conducted in Lebanon, aiming to contribute to the EQUIP initiative.
Fairburn and Cooper describe competence as the extent to which someone “has the knowledge and skill required to deliver a treatment to the standard needed for it to achieve its expected effects” (2011, p. 373). Competency is described as “the observable ability of a person, integrating knowledge, skills, and attitudes in their performance of tasks. Competencies are durable, trainable and, through the expression of behaviours, measurable” (Mills, 2020, p. 12). A valuable way to observe behaviours and assess competency is via standardised roleplays, where a person can show their skills prior to real-world delivery by facilitating a session in a controlled setting with a simulated client (Ottman & Kohrt, 2020). This is the approach applied on the EQUIP platform. This is in line with Fairburn and Cooper’s (2011) recommendations for roleplay-based methods of assessing competencies and George Miller’s framework (1990) for clinical competency, which differentiates between what a person knows (knowledge), knows how (application), shows how (competency) and does (core quality). The level of competency (shows how) can be assessed by raters observing roleplays with simulated intervention participants (Jordans et al., 2021; Kohrt & Jordans et al., 2015; Pedersen et al., 2021).
The EQUIP platform includes the following competency assessment tools covering foundational helping skill (FHS) competencies, also known as common factors, which are essential for effective delivery of care, such as communication, empathy and assessment of harmful situations: the ENhancing Assessment of Common Therapeutic factors (ENACT) for facilitators working with adults (Kohrt & Jordans et al., 2015; Watts et al., 2021); the Working with children-Assessment of Competencies Tool (WeACT), specific to working with children (Jordans et al., 2021); and the Group facilitation-Assessment of Competencies Tool (GroupACT), covering key competencies unique to the delivery of group-based programmes and typically used alongside the ENACT or WeACT (Pedersen et al., 2021). In addition to developing these tools, it is important to understand how to execute and operationalise competency assessments using the combination of tools and roleplays. A scoping review revealed that the use of structured roleplays with simulated clients needs to consider time and resources when applied in low-resource settings (Ottman et al., 2020). Equally important is a clear description of how competency assessment processes can be flexible and adapted to different contexts (Pedersen et al., 2021). This means there should be clear guidance on how to adapt the competency assessment and roleplay materials and how to use them in facilitator trainings in different contexts.
With the aim of improving the EQUIP platform through real-world use, we describe the process of competency assessments of non-specialist facilitators using structured roleplays. Specifically, this study comprises two phases. In the first phase (the preparation phase), we focus on the experiences, challenges and best practices of the competency raters who were trained to use the assessment tools and the actors who were trained to act out the roleplays. In the second phase (the implementation phase), we focus on the experiences, challenges and best practices of the raters using the assessment tools, the actors acting in the roleplays, and the facilitators trained and assessed on their competencies by means of roleplays. This study thus aims to: (1) understand the process of preparing raters to assess competencies and reach sufficient inter-rater reliability, (2) understand the process of preparing actors to perform standardised roleplays for the competency assessments, and (3) explore the experiences and outcomes of non-specialist facilitators who participated in the standardised roleplay approach.
Methods
The current study was conducted in Lebanon, a country dealing with the consequences of a protracted refugee crisis in addition to an economic collapse that began in October 2019. The study took place in Beirut between October 2019 and March 2020. During this time, activities were halted several times due to security issues and COVID-19 lockdown measures.
This process evaluation was applied to a psychological intervention called Early Adolescent Skills for Emotions (EASE) (Dawson et al., 2019), whose standard training was adapted into a competency-driven version. EASE is a transdiagnostic, group-based intervention intended to be delivered by non-specialist facilitators to adolescents aged 10 to 14 years. The intervention comprises seven 90-minute sessions that teach adolescents skills to enhance their psychological coping. In addition, three sessions for caregivers are provided, based on evidence that caregiver sessions can improve the outcomes of psychological interventions for children (Brown et al., 2019).
Participants in this study included raters, actors and facilitators. In the context of the EASE intervention, “raters” are the individuals who were trained in observing and assessing the competencies of the facilitators; “actors” are the individuals who simulated children or caregivers during the standardised roleplays; and “facilitators” are the non-specialists who were trained to deliver the intervention. All participants provided written consent and were aged 18 and older. Five raters (Age M = 25.80, SD = 3.19, range = 21–28; years of experience in MHPSS M = 4.70, SD = 2.33, range = 1.50–7) were selected based on their motivation and relevant experience in MHPSS intervention delivery, using an open call for participation. Four actors (Age M = 25.50, SD = 2.69, range = 23–30) performed different roles for the facilitator assessments, including the role of facilitator in standardised roleplays and the role of intervention participants in live roleplays. The actors were recruited from a local theatre group and selected based on their acting skills and experience; each had more than 3 years of acting experience. Five facilitators (Age M = 22.40, SD = 1.14, range = 21–24; years of experience working with children M = 2.03, SD = 1.35) were trained and assessed on their competencies. The facilitators did not have a background in social sciences and had no previous experience in facilitating MHPSS interventions.
This study is a mixed-methods process evaluation within a larger, three-phase model in which the results of each phase inform the subsequent one. Phase 1 included the adaptation of the competency assessment tools to the EASE intervention and the Lebanese context, and the preparation of the raters and actors. Phase 2 concentrated on the delivery of the competency-driven training (CDT) of EASE to a group of facilitators, accompanied by pre- and post-training assessments using standardised roleplays. The CDT was developed and delivered by the CDT trainer, a senior MHPSS specialist at the implementing organisation with more than 8 years' experience of working in MHPSS programmes. The CDT trainer is experienced in delivering the EASE intervention and studied the competency assessment tools in order to tailor the competency-driven version of the EASE intervention training. This was done by using the facilitators' pre-training competency assessment scores to inform the delivery order and dosage of the regular training content, while keeping all the building blocks present and giving competency-linked feedback during training practices (Jordans et al., 2022). Details of how the EASE intervention training was adapted to and implemented as a CDT will be reported in another paper (Aoun et al., in preparation). Using the learnings from the first two phases, Phase 3 included the implementation of the CDT versus the training-as-usual of the EASE intervention, with pre- and post-training assessment of facilitators' competencies (see Jordans et al., 2022).
The current process evaluation concerns Phases 1 and 2 and addresses the aims by (1) adapting the competency assessment tools, delivering and evaluating the competency raters' training and assessing the inter-rater reliability at multiple time points, (2) delivering and evaluating the actors' training on standardised roleplays and jointly developing the standardised roleplay videos, and (3) interviewing and observing the non-specialist facilitators taking part in the standardised roleplay approach for competency assessments.
Ethical approval for this study was obtained from Saint Joseph University in Beirut, approval number CEHDF_1490, GWU Institutional Review Board Office, approval number IRB# NCR191797, and WHO Research Ethics Review Committee, approval number ERC.0003192.
Guides for the key informant interviews (KIIs), cognitive interviews (CIs) and focus group discussions (FGDs) were administered to the participants. The guides were originally developed by study partners and adapted by the country team to be culturally relevant to the Lebanese context and the EASE intervention. The guides were translated into Arabic, the language in which they were administered.
Competency Assessment Tools
The ENACT was developed for the assessment of common competencies across psychological treatments to rate facilitators’ competencies and detect potential harm when working with adults (Kohrt & Jordans et al., 2015; Kohrt & Ramaiya et al., 2015). The ENACT version used in this study consists of 15 items. Built on the ENACT, the WeACT was developed to assess competencies of service providers working with children and adolescents in interventions across MHPSS, child protection and education sectors (Jordans et al., 2021). The WeACT version used in this study included 13 items. The third tool, GroupACT, assessed group facilitation competencies for delivery of MHPSS services (Pedersen et al., 2021). The original GroupACT included eight items, which were adapted to six items for this study (details in the Procedure section).
All three tools aim to assess facilitators' competencies and detect potentially harmful behaviours (Jordans et al., 2021). All use a four-level Likert-type scale for each competency: Level 1 indicating potential harm, Level 2 indicating absence of sufficient competency, Level 3 indicating competency and Level 4 indicating advanced level or mastery of the competency. Items for each of the three tools are listed in [Table 1]. For the full list of items and scoring of the GroupACT and ENACT we refer to the EQUIP platform (https://equipcompetency.org). For the WeACT, see Appendix 1.
The two phases of the current study and the steps taken in each phase are illustrated in [Figure 1].
Phase 1: Preparation of the Competency Assessment
First, previous versions of the ENACT, WeACT and GroupACT were translated and adapted to the Lebanese context and the EASE intervention in several workshops involving seven collaborators from the research team and a group of MHPSS trainers and supervisors. Second, the actors were trained to act as facilitators for competency assessment in standardised roleplays. These roleplays were video-taped and used in the raters' training. Based on the competencies shown by the actors acting as facilitators, the scores from the raters were used to calculate the inter-rater reliability (IRR) of the assessment tools. Three videos, made with the same actors (videos 1, 2 and 3 in [Figure 2]), were used for IRR testing throughout the raters' training with the aim of giving specific guidance and improving the tools where needed, that is, when agreement was insufficient. The first competency raters' training (Training A in [Figure 2]) was given over 3 consecutive days. Raters' trainings A and B were skill-based and designed to improve the raters' abilities to observe and rate the video roleplays using the competency tools. The trainings included presentations of the competencies, plenary discussions of examples of each competency and behaviour and how each can be operationalised in the context of working with children and their caregivers, and practice rating single- and multi-competency roleplays. Following these first two rater trainings, the EQUIP consortium decided to change the tools' format from a level format to a check-box format (see [Figure 3]). In the level format, each level had only a paragraph description. The check-box format added attributes: observable behaviours operationalising each of the four levels of each competency item. Each attribute is scored 0 (not present) or 1 (present).
After this format change, during Training C, the items that still showed low agreement between the raters were discussed, and further training on the attribute format of the tools was given before the final IRRs were calculated.
|Figure 2 Process of the Raters Trainings and Inter-Rater Reliability Testing|
|Figure 3 Example of Items Changing from Level Format to Level and Attributes Format|
Finally, the actors were trained to act as intervention participants, that is, children and caregivers, in standardised roleplays. These roleplays were performed live with the trained facilitators, so the facilitators’ competencies could be assessed by the raters in the next phase.
Phase 2: Implementation of the Competency Assessment
This phase comprised the use of the assessment tools by the raters observing the live roleplays and assessing the facilitators' competencies. The implementation included a pre-training competency assessment, the delivery of the CDT, a post-training competency assessment and a competency assessment after implementation and supervision of the EASE intervention (see [Figure 1]). However, we do not report on the post-implementation assessment, as activities were halted due to COVID-19 restrictions and not finalised.
The trained competency raters assessed each of the competencies in real time, while facilitators performed the intervention with the actors, who role-played the participants. In total, each facilitator underwent four different standardised roleplay sessions with the actors: one with a group of “children” to assess WeACT items 1–4, 6–8, 12 and 13 plus all six GroupACT items; one with a single “child” to assess WeACT items 5 and 9–11; one group session with “caregivers” to assess ENACT items 1–6, 9 and 13–15; and one individual session with a single “caregiver” for ENACT items 7, 8 and 10–12. Information about the context and instructions on their role in the roleplays were shared with the facilitators 15 minutes before the pre-training assessment and 1 day before the post-training assessment. All facilitators' assessments were videotaped with consent, for reference by the research team only.
All the qualitative data were transcribed verbatim and coded in NVivo 12 (released in March 2018) using inductive and deductive approaches to establish a codebook of pre-defined themes, which were analysed using a framework analysis approach (Gale et al., 2013). For the quantitative analysis, we assessed the IRRs as illustrated in [Figure 2]. The IRR based on the competency levels (1–4) was calculated with an intra-class correlation (two-way mixed effects model, single measures, absolute agreement) (Koo & Li, 2015). The IRR based on the attributes (yes/no) was calculated using Krippendorff's alpha (Hayes & Krippendorff, 2007). For the analysis of the live competency assessments, we assigned a so-called designated rater, the rater who agreed most with the intended scoring during the video roleplay assessments. Lastly, to assess change over time in the scores of the five facilitators from pre- to post-training, we performed Wilcoxon signed-rank tests at different levels: on the mean total scores of the competencies, and on scores dichotomised into no competency (Levels 1 and 2) and competency (Levels 3 and 4). The Wilcoxon signed-rank test is a non-parametric test for differences between two paired samples, in our case pre- to post-training changes; we used it because of our small sample size.
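As an illustration of the level-based IRR, the single-measures, absolute-agreement intra-class correlation can be computed from a subjects-by-raters matrix of ratings via the standard two-way ANOVA mean squares. The sketch below is a minimal implementation with hypothetical ratings, not the study's actual analysis code or data.

```python
import numpy as np

def icc_single_absolute(ratings):
    """Two-way, single-measures, absolute-agreement ICC (McGraw & Wong's ICC(A,1)).

    ratings: (n_subjects, k_raters) array of competency levels (1-4).
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)  # per-subject means
    col_means = x.mean(axis=0)  # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between-subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between-raters MS
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: five roleplay moments, each rated by three raters.
perfect = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [2, 2, 2]]
noisy = [[1, 2, 1], [2, 2, 3], [3, 3, 3], [4, 3, 4], [2, 2, 2]]
print(icc_single_absolute(perfect))  # perfect agreement -> 1.0
print(icc_single_absolute(noisy))    # < 1.0 once raters disagree
```

In practice such ICCs are usually obtained from statistical software rather than hand-rolled; the formula above simply makes explicit how rater disagreement (the between-raters and residual mean squares) pulls the coefficient below 1.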
Results
In Phases 1 and 2, a total of four FGDs were conducted with actors and raters (one with each group at the end of the preparation and implementation phases); 11 KIIs with facilitators, the CDT trainer, the actors' trainer and PSS trainers who attended the raters' training; 14 process notes describing the process of training raters and facilitators and of the competency assessments; and five CIs with raters.
Phase 1: Preparation of the Competency Assessment
The adaptation workshops resulted in changes to the tools, making them relevant to the EASE intervention in both the English version and the Arabic translation. For example, words such as “therapist” were replaced by “facilitator”, as EASE is a group-based intervention delivered by at least two non-specialist facilitators. This step benefited from the presence of a group of technical MHPSS team members familiar with MHPSS vocabulary in both languages. Further adaptations were made to make the tools more relevant and suitable for work with children and caregivers, the target groups of the EASE intervention: GroupACT items 1 and 2 were merged, and item 3 “Group Participation” from the original GroupACT tool was excluded due to an overlap with WeACT item 7 “Ensuring children's meaningful participation”. WeACT item 12 “Demonstrates collaboration with caregivers and other actors” was excluded because of the overlap with the use of the ENACT tool. These adaptations resulted in the final items shown in [Table 1].
Actors’ and Raters’ Perceptions
When playing the role of facilitators in the standardised roleplay videos, the actors experienced difficulty acting out fluctuations in competency levels across the different items, such as performing a harmful behaviour (Level 1) for verbal communication and then immediately performing a helpful behaviour (Level 3) for confidentiality. Actors reported that these fluctuations in competency levels made the character unbelievable. They also found the scripts superficial, lacking the depth they were familiar with in the theatre, and as a result did not always adhere to the scripts when preparing the scenes. Practice with the roleplays and scripts, together with a clear idea of the purpose of the roleplays they were acting, was nevertheless perceived as one of the best preparations for the actors.
- Actor 2, FGD Phase 2: “More practice because with any script you get, the more you practice the more you explain things, so through reading and through reviewing we understood better the [competency] level”.
With regard to the preparation of the raters, the challenges were the time needed to understand and observe the competencies of the three tools taken together, the amount of information to retain during assessment and the limited experience of some raters in observing and rating, which benefited from additional training. Raters also experienced stress because they felt they were being tested when trying to attain sufficient agreement for the IRRs. Another challenge related to rating the roleplay videos: in some cases, raters found it difficult to rate the roleplays when there was no very noticeable or serious problem of competency. All the raters indicated that the live roleplays with the real facilitators were easier to assess than the videos, because in the latter many details (e.g. showing the children rather than the facilitator at times, or strong sound effects) were distracting, making it unclear which competency was being played out.
- Rater 1, Age 28: “The other time when we rated the video it was a bit more difficult. Because we were focusing a lot on what’s happening inside the video and there was lots of competencies we needed to pass through, and, umm, there was lots of things that were similar. I remember I had some confusion whether this competency or that.”
Additionally, in the videos it sometimes confused raters that the actor playing the facilitator showed very different levels of related competencies. This happened, for example, when an actor broke confidentiality (a Level 1 harmful behaviour in confidentiality) but did so empathically (a Level 3 helpful behaviour in empathy). Several practices were considered crucial in preparing raters. Among the most frequently mentioned was practising the ratings through live and video competency assessments to ensure understanding of the competencies, such as the difference between what a helpful or unhelpful behaviour looks like, or between a Level 1 and a Level 3 for the same competency item. This was considered more beneficial than reading and discussing the tools.
- Rater 2, FGD Phase 1: “… always to have a quick roleplay ready on every competency, I don’t know if it makes sense, directly having something practical visual to see what is happening so that we are able to learn, ‘this is what she meant in 1 [Level1]’ okay, because as much as the person explains, it is different from seeing what words he used in acting”.
Raters’ Training and IRR Results
[Table 2] shows the results of the IRRs according to the process illustrated in [Figure 2]. For interpretation we use Koo and Li (2015): below 0.5 is unacceptable, between 0.5 and 0.75 is moderate, between 0.75 and 0.9 is good, and above 0.9 is excellent. After the first training (Training A), the IRR1 results were unacceptable and therefore another training was offered (Training B). The results of IRR2 were considered moderate for the GroupACT and ENACT, and good for the WeACT (Koo & Li, 2015). After Training B, the tool format was changed from levels only to attributes and levels. The raters had made many recommendations during Trainings A and B to change the format of the tools to make them more visually appealing, which coincided with the addition of the attributes. Training C was given to train the raters on the use of the revised attribute format. Four consecutive IRR tests were conducted after that training: two immediately after the training, on both competency levels (IRR 3a) and attributes (IRR 3b), and two repeated tests approximately 2 weeks later, on both competency levels (IRR 4a) and attributes (IRR 4b). The results on the competency levels immediately after the training were considered insufficient for the WeACT and ENACT but improved to acceptable values 2 weeks later (see [Table 2]). However, the IRR for the GroupACT was acceptable immediately after the training and decreased 2 weeks later. With respect to the attributes, the IRRs immediately after the training were all lower than those for the competency levels, but the WeACT and ENACT IRRs increased 2 weeks later. As with the IRR on the competency levels, the GroupACT IRR decreased 2 weeks later.
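The Koo and Li (2015) bands applied above can be written as a simple lookup. This illustrative helper is not part of the study's analysis pipeline; the boundary handling (e.g. whether 0.75 falls in "moderate" or "good") is our assumption, as the bands are stated without closed boundaries.

```python
def interpret_icc(icc):
    """Map an ICC value to the Koo and Li (2015) bands used in this study."""
    if icc < 0.5:
        return "unacceptable"
    if icc < 0.75:
        return "moderate"
    if icc <= 0.9:
        return "good"
    return "excellent"

print(interpret_icc(0.48))  # unacceptable
print(interpret_icc(0.68))  # moderate
print(interpret_icc(0.82))  # good
print(interpret_icc(0.95))  # excellent
```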
|Table 2 Inter-Rater Reliabilities After Each Training Phase for the Three Tools|
Phase 2: Implementation of the Competency Assessment
Among the challenges faced by the actors was remembering the purpose of the live roleplays: it was hard to get the actors to focus on the competencies, as they often focused more on the artistic side of acting. This was attributed to the limited time available for training the actors on the roleplay assessments.
- Actors’ trainer: “So we have brought professional actors, and we didn’t have time unfortunately … I explained to them a bit what psychosocial support means, what is the role of the facilitator, I would have preferred that I give a longer snapshot but the time was not helping”.
The actors wanted to study the characters and their stories in greater depth to improve their performance; they felt their acting quality would improve if the scenarios and characters were more detailed and fleshed out. This led them to add details themselves and go off script. Raters, on the other hand, could see the inconsistencies in acting across time points and noted that some competencies could not be assessed in all the scenarios. They thought the goal would be better met if the actors always kept in mind the key points that needed to be rated and stuck to the script as written.
- Rater 2, Age 28: “Now there are a lot of things for example that I remember that when Actor 2 forgot a sentence, there was a competency that they didn’t come across, so we were forced to grade the same way “none of the above” several times; these things happened because there were a lot of things that they [the actors] didn’t show up in the scenario”.
The actors pointed out that it was emotionally difficult to play the role of a person with suicidal thoughts which was apparent in their body language. The actors’ trainer also mentioned that actors were not able to accurately portray persons living with disabilities.
With respect to the facilitators, the main challenges were the artificial setting, the intimidating nature of being assessed on their competencies and a lack of clarity about how competency was evaluated during roleplays. Some facilitators said the roleplays felt artificial in the sense that the actors behaved in unrealistic ways. The roleplays also felt unnatural and uncomfortable to some facilitators because they were being observed, rated and filmed, which made them feel anxious.
- Facilitator 4: “You know that there is someone who is shooting/videoing you, and you know that...a lot of things that makes you a little bit stressed”.
Pre- to Post-Training Changes in Competencies
[Table 3] presents the pre- to post-training changes in competency scores, expressed as percentages. The largest differences were the decreases in level 1 scores (potential harm) and level 2 scores (absence of sufficient competency). The percentages of level 3 scores (competency) increased from pre- to post-training for all three tools, as did the percentages of level 4 scores (advanced or mastery).
Wilcoxon signed-rank tests of the differences in mean total scores from pre- to post-training showed a significant increase in competencies for the WeACT (z = −2.03, p = 0.04) and GroupACT (z = −2.06, p = 0.04), but not for the ENACT (z = −1.46, p = 0.14). In terms of changes from non-competent scores (levels 1 and 2) to competent scores (levels 3 and 4) from pre- to post-training, there was a significant positive shift in the WeACT (z = −2.04, p = 0.04). For the GroupACT and ENACT there was also a positive shift from non-competent to competent scores, but it did not reach significance (GroupACT z = −1.84, p = 0.06; ENACT z = −1.75, p = 0.08).
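For readers unfamiliar with the statistic, the mechanics behind these z values can be illustrated with a minimal Python sketch of the Wilcoxon signed-rank test under the normal approximation. The pre/post score pairs below are hypothetical illustration values, not the study's facilitator data, and no tie correction is applied to the variance.

```python
import math

def signed_rank_z(pre, post):
    """Normal-approximation z for the Wilcoxon signed-rank test.

    Zero differences are dropped; tied absolute differences receive
    averaged ranks. No tie correction is applied to the variance.
    """
    diffs = [b - a for a, b in zip(pre, post) if b - a != 0]
    n = len(diffs)
    abs_sorted = sorted(abs(d) for d in diffs)

    def avg_rank(x):
        # average the 1-based positions of all values equal to x
        positions = [i + 1 for i, v in enumerate(abs_sorted) if v == x]
        return sum(positions) / len(positions)

    w_plus = sum(avg_rank(abs(d)) for d in diffs if d > 0)  # sum of positive ranks
    mu = n * (n + 1) / 4                                    # mean of W+ under H0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)       # SD of W+ under H0
    return (w_plus - mu) / sigma

# Hypothetical mean competency scores for five facilitators
pre = [1.8, 2.1, 2.0, 2.4, 1.9]
post = [2.6, 2.3, 2.9, 2.5, 2.8]
print(round(signed_rank_z(pre, post), 2))  # z ≈ 2.02
```

The sign convention differs from some statistical packages (which report the z of the smaller rank sum as negative, as in the results above); the magnitude and p-value are the same.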
Despite their anxiety during assessment, most facilitators appreciated getting feedback on their competency scores. They also liked that they could see how they had improved over time, because the scores provided them with concrete indicators of their improvement.
- Facilitator 1, Age 25: “I was happy when I was seeing my scores in detail. First I felt afraid...what if I am not good enough? I might make mistakes…but the whole concept was, wow, if I got evaluated means I will improve faster.”
Others, however, were disappointed by their scores. One facilitator mentioned it was uncomfortable to be observed and recorded, and that this might be why they received low scores in the competency assessment. Despite this disappointment, the CDT trainer and facilitators expressed how much they appreciated the feedback process.
- CDT Trainer: “They were really appreciating it, they were feeling it. It was obvious that the pretest surprised them a bit. … But during training the fact that they were given feedback and improving was very positive”.
Discussion
Regarding the preparation phase, the findings revealed, first, the importance of having a team of native speakers with an MHPSS background who are familiar with the items and concepts of the tools in order to adapt them to the context. Adapting and contextualising the roleplays makes it possible to elicit real-world scenarios in a controlled setting, which helps draw out the knowledge, skills and attitudes that determine the competencies of the facilitator.
Second, to ensure proper preparation of actors and raters for competency assessment, it is advisable to give them sufficient time to become familiar with the context and the tools. Third, variation between the levels of the competencies in the video roleplays was needed in order to have a reliable assessment across the range of competencies. It is also important to have sufficient time with the actors to explain the MHPSS context and the need for level fluctuation for proper competency assessment. Fourth, in terms of the raters’ training, while 7 days of training were offered, more of that time and effort could have been invested in practicum. Raters also reported that the days of training were tiring, that understanding each competency took effort, and that they found the process of reaching sufficient agreement stressful. This outcome might be attributed to the background of the raters, who struggled to distinguish the actual competencies from their own perceptions of competency. Even though the raters had social science backgrounds, they had no prior exposure to observing and assessing the competencies of facilitators, which are distinct, context-specific concepts. Our main recommendation from these findings is that raters conducting competency assessments should have formal prior experience in providing supervision of MHPSS interventions. We argue that the best people to train as raters on competency assessment are the actual supervisors of the MHPSS interventions. They would not only rate facilitators pre- and post-training, but also give direct feedback on performance and competencies in order to provide guidance and supervision accordingly.
In terms of inter-rater agreement, the IRRs of the ENACT, GroupACT and WeACT improved and were generally acceptable after additional training when measured in levels (1–4). When measured in attributes (0/1), however, the IRRs were lower. Yet the findings from the qualitative interviews showed a preference for the attributes format of the tools, because the concrete description of behaviours was clearer for the raters. To increase IRR, we recommend concrete behaviour descriptions and training with sufficient practice time for the raters to understand how to interpret each competency and its scoring system in both levels and attributes. Our findings show that raters found the video roleplays used for IRR purposes difficult to rate because of the amount of detail in the scenes that was not relevant for the purpose of the video (such as sound effects and shots of the children’s behaviour rather than the facilitator’s). For the IRR videos, we recommend simple scenes that focus mainly on the facilitator’s actions, limiting other kinds of visual or audio effects.
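To make the attributes-format agreement concrete, the sketch below computes Krippendorff’s alpha for two raters scoring binary (0/1) attributes, following the nominal-data coincidence-matrix formulation (Hayes and Krippendorff, 2007). The ratings shown are hypothetical, not the study’s data, and the simplified two-rater form assumes no missing values.

```python
from collections import Counter

def krippendorff_alpha_nominal(rater1, rater2):
    """Krippendorff's alpha for two raters, nominal data, no missing values."""
    units = list(zip(rater1, rater2))
    n = 2 * len(units)  # total pairable values across all units
    counts = Counter(v for pair in units for v in pair)
    disagreements = sum(1 for a, b in units if a != b)
    # observed disagreement: each disagreeing unit contributes 2 ordered pairs
    d_observed = 2 * disagreements / n
    # expected disagreement from the marginal value frequencies
    d_expected = sum(counts[c] * counts[k]
                     for c in counts for k in counts if c != k) / (n * (n - 1))
    return 1 - d_observed / d_expected

# Hypothetical attribute (0/1) scores from two raters on ten items
r1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
r2 = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(round(krippendorff_alpha_nominal(r1, r2), 2))  # 0.55
```

Note how alpha penalises chance agreement: the two raters agree on 8 of 10 items, yet alpha is well below 0.8 because most ratings fall in the majority category, which is one reason binary attribute IRRs can look lower than raw agreement suggests.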
During the implementation phase, several challenges were identified. First, the actors’ reluctance to adhere to the standardised roleplay script affected the display of certain competencies and ultimately the accuracy of the competency assessments. As in the preparation phase, the actors would have liked more training for the live roleplays, to understand the MHPSS context better and to be able to prompt all the competencies in the facilitators. They also found it challenging to portray a character with suicidal thoughts. These findings might be attributed to the actors’ lack of experience with MHPSS concepts. While the use of professional actors was beneficial in making the characters believable, a good understanding of MHPSS concepts is also important for authenticity. We also recommend that roleplay scripts feature specific, short, easy-to-recall prompts for actors to elicit certain competencies (e.g. all actors making a statement such as “Sometimes I go to sleep at night and don’t want to wake up in the morning” to elicit competencies related to discussing thoughts of death and suicidal ideation). Concerning the facilitators’ experiences, findings showed that they appreciated their participation and found the feedback valuable in helping them improve their competencies, despite reporting feelings of anxiety during the live roleplay assessments. Adequately informing facilitators ahead of time about the purpose and process of competency assessments could be one way to reduce this anxiety. Similarly, to avoid disappointment over low scores during the feedback process, we learned that sharing assessment results alone is not the best way to give competency-based feedback. Using the learnings on feedback from this study and other sites, a feedback learning module has been developed on the EQUIP platform.
This highlights the importance of considering why, when, and how feedback can be given, covering the competency assessment results in addition to the trainer’s and facilitator’s points of view (“Feedback in Competency based training”, 2022). Within Miller’s conceptual framework, the feedback process of using the tools can identify potentially harmful behaviours, and feedback sessions guided by the tools can elicit more about the facilitator’s existing dispositions and attitudes, which can then be used to remediate competencies further.
The quantitative pre- to post-training changes support the facilitators’ improvement in competencies. Although this article reports only the phase of the study in which raters assessed the competencies of the facilitators who received the CDT, in a later phase we showed that, when rated on the same competencies by raters blinded to the training received, CDT facilitators showed an 18% increase in competencies compared with facilitators who received regular training. It is worth highlighting that the competencies observed in this study are core skills competencies, not intervention-specific ones. In previous studies, we found several benefits of using the tools and rating specific competencies, including allowing tailor-made feedback by identifying areas for improvement, increasing attunement between trainer and facilitator on concrete observable behaviours, and increasing clarity on the benchmark needed for implementing a certain intervention (Falender and Shafranske, 2012; Jordans et al., 2022).
Given the limited resources in humanitarian settings, and although in this study the tools were paired with roleplays, they also have the potential to be used in real-world scenarios (i.e. during supervision sessions) and by experts such as the supervisors, who can be trained as raters as mentioned earlier. Pragmatically, we recommend embedding the tools and their structured feedback approach within existing organisational training and supervision, whereby MHPSS trainers can choose which tool and which competencies are most relevant for assessing their facilitators (Pedersen et al., 2021).
There are several limitations to this study. First, the same video was used twice to establish IRR; however, because more than a month passed between the two time points, we estimate the effect on the IRR scores to be minimal. Second, the pre- to post-training changes demonstrated in facilitators’ competencies cannot be generalised, given the small sample size. The results do confirm the sensitivity to change of the competency assessment tools, as demonstrated in an earlier study in Palestine (Jordans et al., 2021) and the phase 3 study in Lebanon (Jordans et al., 2022).
In conclusion, our study suggests that roleplay-based competency assessments can be used to train non-specialists in providing MHPSS services, despite some challenges in attaining sufficient inter-rater reliability. To optimise this approach, it is important to provide thorough training for facilitators, raters and actors on the concepts underpinning the competencies, to use standardised roleplays for training and supervision purposes, and to allow sufficient time for practice. Our results provide valuable insights for improving the EQUIP platform’s resources and tools for various purposes in low-resource settings (Kohrt et al., 2020). Although competency assessment tools are important for measuring the competencies of facilitators, which is a key factor in quality, they are not a replacement for quality-of-care measures and need to be used alongside existing measures (Quinlan-Davidson et al., 2021). Additionally, we advise that facilitator feedback and self-reported measures be considered in the field to complement the tools. Future studies of CDT should assess the relationship between the competencies assessed and the quality of care provided during implementation, for instance by examining whether competencies during implementation function as a moderator or mediator of quality of care.
The authors are grateful to the participants in this study and colleagues at War Child Lebanon for their support and hard work. The authors would like to thank Marwa Itani for her support in the implementation of the study and Charbel Ghostine for his support in actors training.
Financial support and sponsorship
Funding for the WHO EQUIP initiative is provided by USAID. The views expressed in this article are solely the responsibility of the authors, and do not necessarily reflect the opinions, choices, or policies of the institutions they are associated with. BAK and MJDJ receive funding from the U.S. National Institute of Mental Health (R01MH120649).
Conflicts of interest
There are no conflicts of interest reported.
References
Aoun M., Steen F., Coetzee A., El Masri R., Chamate S. J., Pedersen G. A., El Chammay R., Schafer A., Kohrt B. A., Jordans M. J. (2023). The development and the delivery of a competency driven training [Manuscript in preparation]. Research and Development Department, War Child Holland.
Brown F. L., Aoun M., Taha K., Steen F., Hansen P., Bird M., Dawson K. S., Watts S., Chammay R. E., Sijbrandij M. (2020). The cultural and contextual adaptation process of an intervention to reduce psychological distress in young adolescents living in Lebanon. Frontiers in Psychiatry, 11, 212.
Dawson K. S., Watts S., Carswell K., Shehadeh M. H., Jordans M. J., Bryant R. A., Miller K. E., Malik A., Brown F. L., Servili C. (2019). Improving access to evidence-based interventions for young adolescents: Early Adolescent Skills for Emotions (EASE). World Psychiatry, 18(1), 105.
Falender C. A., Shafranske E. P. (2012). The importance of competency-based clinical supervision and training in the twenty-first century: Why bother? Journal of Contemporary Psychotherapy, 42, 129–137.
Gale N. K., Heath G., Cameron E., Rashid S., Redwood S. (2013). Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Medical Research Methodology, 13(1), 1–8.
Hayes A. F., Krippendorff K. (2007). Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1), 77–89.
Hoeft T. J., Fortney J. C., Patel V., Unützer J. (2018). Task-sharing approaches to improve mental health care in rural and other low-resource settings: A systematic review. The Journal of Rural Health, 34(1), 48–62.
Jordans M. J., Kohrt B. A. (2020). Scaling up mental health care and psychosocial support in low-resource settings: A roadmap to impact. Epidemiology and Psychiatric Sciences, 29, e189.
Jordans M. J., Kohrt B. A., Sangraula M., Turner E. L., Wang X., Shrestha P., Ghimire R., van’t Hof E., Bryant R. A., Dawson K. S. (2021). Effectiveness of Group Problem Management Plus, a brief psychological intervention for adults affected by humanitarian disasters in Nepal: A cluster randomized controlled trial. PLoS Medicine, 18(6), e1003621.
Jordans M., Coetzee A., Steen H. F., Koppenol-Gonzalez G. V., Galayini H., Diab S. Y., Aisha S. A., Kohrt B. A. (2021). Assessment of service provider competency for child and adolescent psychological treatments and psychosocial services in global mental health: Evaluation of feasibility and reliability of the WeACT tool in Gaza, Palestine. Global Mental Health, 8, e189.
Jordans M., Steen F., Koppenol-Gonzalez G. V., El Masri R., Coetzee A. R., Chamate S., Ghatasheh M., Pedersen G. A., Itani M., El Chammay R. (2022). Evaluation of competency-driven training for facilitators delivering a psychological intervention for children in Lebanon: A proof-of-concept study. Epidemiology and Psychiatric Sciences.
Kohrt B. A., Jordans M. J., Rai S., Shrestha P., Luitel N. P., Ramaiya M. K., Singla D. R., Patel V. (2015). Therapist competence in global mental health: Development of the ENhancing Assessment of Common Therapeutic factors (ENACT) rating scale. Behaviour Research and Therapy, 69, 11–21.
Kohrt B. A., Ramaiya M. K., Rai S., Bhardwaj A., Jordans M. J. D. (2015). Development of a scoring system for non-specialist ratings of clinical competence in global mental health: A qualitative process evaluation of the Enhancing Assessment of Common Therapeutic Factors (ENACT) scale. Global Mental Health.
Kohrt B. A., Schafer A., Willhoite A., van’t Hof E., Pedersen G. A., Watts S., Ottman K., Carswell K., van Ommeren M. (2020). Ensuring Quality in Psychological Support (WHO EQUIP): Developing a competent global workforce. World Psychiatry, 19(1), 115.
Koo T. K., Li M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163.
Miller G. E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65(9), S63–S67.
Mills J., Middleton J. W., Schafer A., Fitzpatrick S., Short S., Cieza A. (2020). Proposing a re-conceptualisation of competency framework terminology for health: A scoping review. Human Resources for Health, 18(1), 1–16.
Ottman K. E., Kohrt B. A., Pedersen G. A., Schafer A. (2020). Use of role plays to assess therapist competency and its association with client outcomes in psychological interventions: A scoping review and competency research agenda. Behaviour Research and Therapy, 130, 103531.
Patel V. (2012). Global mental health: From science to action. Harvard Review of Psychiatry, 20(1), 6–12.
Pedersen G. A., Sangraula M., Shrestha P., Laksmin P., Schafer A., Ghimire R., Luitel N. P., Jordans M., Kohrt B. A. (2021). Development of the Group Facilitation Assessment of Competencies Tool (GroupACT) for group-based mental health and psychosocial support interventions in humanitarian emergencies and low-resource settings. Journal on Education in Emergencies, 7(2), 335–376.
Pedersen G. A., Gebrekristos F., Eloul L., Golden S., Hemmo M., Akhtar A., Schafer A., Kohrt B. A. (2021). Development of a tool to assess competencies of Problem Management Plus facilitators using observed standardised role plays: The EQUIP competency rating scale for Problem Management Plus. Intervention, 19(1), 107–117.
QSR International. (2012). NVivo qualitative data analysis software (Version 10). QSR International Pty Ltd.
Quinlan-Davidson M., Roberts K. J., Devakumar D., Sawyer S. M., Cortez R., Kiss L. (2021). Evaluating quality in adolescent mental health services: A systematic review. BMJ Open, 11(5), e044929.
Singla D. R., Kohrt B. A., Murray L. K., Anand A., Chorpita B. F., Patel V. (2017). Psychological treatments for the world: Lessons from low- and middle-income countries. Annual Review of Clinical Psychology, 13, 149–181.
Watts S., Hall J., Pedersen G. A., Ottman K., Carswell K., van’t Hof E., Kohrt B. A., Schafer A. (2021). The WHO EQUIP foundational helping skills trainer’s curriculum. World Psychiatry, 20(3), 449–450.
World Health Organization. (2007). Task shifting: Rational redistribution of tasks among health workforce teams: Global recommendations and guidelines. World Health Organization.