BACKGROUND

Team-based healthcare represents a fundamental shift in the way healthcare organizations deliver care. In primary care, for example, the traditional care model involved a physician assisted by a nurse, with the physician assuming primary responsibility for the patient. In a team-based approach, a care team consisting of multiple professionals, including physicians and physician extenders, nurses or care managers, and other resources such as pharmacists, nutritionists, and mental health professionals, is collectively responsible for the patient.1,2 Team-based care involves more complex coordination among clinical staff, which tends to be more difficult to perform to standard than work not requiring coordination;3,4 it also involves new roles, responsibilities, and relationships among existing clinical personnel.2,5,6 Healthcare facilities run by the US Department of Veterans Affairs (VA) recently transitioned to a team-based model of primary care, known within VA as the Patient-Aligned Care Team (PACT).

The VA has markedly improved quality of care and clinical performance in the last decade through clinical performance measurement (evidence-based, quantitative indicators of the quality of health care delivered)7 and audit and feedback (A&F),8–14 which involves measuring an individual’s professional practice or performance, comparing it to professional standards or targets (in VA’s case, clinical performance measures), and delivering the results of this comparison to the individual.15 Recent health services research has begun to unpack the factors that make A&F more effective in clinical settings.10,15,16 However, most A&F research uses the individual as the unit of analysis; even studies comparing group-level aggregations of feedback with individual-level feedback16,17 assume that the recipient is an individual.

Research in management and psychology suggests that goal setting and feedback for a team may require different strategies to be effective; though the evidence is somewhat scarce, the critical issue is likely clarity over how individual contributions affect team goals. In an individual setting, feedback directs individual attention to details of the task, thereby affecting subsequent goal setting and performance. In a team setting, however, individual feedback alone directs the individual’s attention to the task but provides no information about how changes in the individual’s performance affect team outcomes, which depend in part on the performance of others. For example, Mitchell & Silver18 found that giving individual goals to members of a team decreased team performance; along similar lines, Crown & Rosse19 observed that “groupcentric” goals (individual goals focusing on contributions to team performance) combined with team goals led to the highest team performance. Finally, DeShon and colleagues20 noted parallel processes for individual-level and team-level goals and feedback, with team members performing to the level at which they received the most and highest-quality feedback: those receiving individual-level feedback performed best against individual-level goals and measures, whereas those receiving group-level feedback focused on team performance. Team members receiving both types of feedback, however, did not perform as well at either level as those receiving only one type.

To the extent that the team structure aims to empower its members to provide quality care, the literature suggests aiming feedback practices at teams rather than individuals. However, few data exist to determine whether current feedback practices are well aligned to support teams. A clearer understanding of current practices in the care team setting is therefore needed to optimize feedback effectiveness.

In this article, we describe how A&F is delivered in an increasingly team-based primary care environment with a strong history of provider-level A&F. We report the experiences of primary care clinicians and leadership at 16 VA Medical Centers to identify changes in A&F practices occurring alongside PACT implementation.

METHOD

This study was part of a larger funded research project examining differences among high, low, and moderately performing facilities regarding feedback strategies, feedback characteristics, and feedback-related organizational culture. Detailed methods for this project are published elsewhere and summarized here.9 Our local Institutional Review Board approved this study.

Design and Setting

The primary study involved telephone interviews with facility leadership and primary care personnel at 16 VA Medical Centers, selected purposively to represent a variety of geographic regions and outpatient clinical performance levels.9 The current paper explores broad changes in clinical performance feedback associated with PACT implementation, irrespective of differences in facility characteristics. Nationally, VA has made available tools that facilitate delivery of clinical performance data to individuals, including the Primary Care Almanac (a panel-management information tool) and the PACT Compass (used to track indicators such as coordination), along with several other reporting tools. How these tools are implemented and used, however, is left to individual facilities, as is the case for other clinical performance feedback practices.

Participants

At each facility, we sought to interview four informants: the facility director, the associate chief of staff (ACOS) for primary care, one full-time primary care physician, and one full-time primary care nurse.

Procedure

Interviewer Training

The principal investigator (an industrial/organizational psychologist) and co-investigator (a general internist) instructed interviewers in interviewing techniques; both are experienced in interviewing and qualitative research. Instruction included a didactic session on interviewing technique, observation of interviews conducted by the principal investigator (PI) and co-investigator, and two mock interviews with critique. A master’s-level industrial/organizational psychologist, a registered dietitian, and two bachelor’s-level health-science specialists with backgrounds in biology and sociology (respectively) comprised the interviewing team.

Participant Recruitment and Telephone Interviews

We invited prospective participants via e-mail to enroll in the study. Those agreeing to participate after initial or follow-up contact were scheduled for a consent discussion and interview. Trained research assistants interviewed each participant for 1 hour, using a semi-structured interview guide. Participants answered questions about the types of External Peer-Review Program (EPRP) and other quality/clinical performance information they receive and actively seek out, their opinions and attitudes about the utility of EPRP data as a form of feedback, and the ways they use this information. EPRP is a nationally abstracted database containing performance data for all VA medical facilities on over 90 indicators covering access, quality of care, cost-effectiveness, and patient-satisfaction domains; data are abstracted monthly and reported quarterly.21 EPRP is the official data source for VA’s clinical performance management system, providing indicators that leadership uses to gauge performance and make administrative decisions about facilities in their networks.

Using a constant comparative approach, we identified several PACT-related themes emergent in the first third of interviews; we iteratively adapted our interview guide to capture additional information about the extent to which PACT had been implemented, and changes in clinical performance feedback since PACT implementation. (See Appendix for PACT-related interview questions and distribution of interviews across interviewers and interviewees.) An independent service transcribed interview recordings; interviewers cross-checked transcripts to recordings for accuracy.

Data Analysis

We analyzed transcripts using techniques adapted from grounded theory and content analysis, using Atlas.ti v. 6.2.22

We conducted automated searches of transcripts for PACT-related terms to aid later, more detailed manual coding for thematic content. We then identified and categorized direct responses to our PACT-related questions from the interview guide. We followed this initial coding with a manual transcript review (aided by the terms tagged in the automated search) to find passages that answered our PACT-related questions even if they were not direct responses to a PACT-related question. Coded passages were then categorized according to major emergent themes and reviewed for negative cases, and a central story was identified.
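
The following is a minimal sketch of what this kind of automated term search looks like in practice; it is illustrative only and is not the study's actual tooling (the analysis used Atlas.ti). The directory layout and term list are assumptions made for the example.

```python
# Illustrative sketch of an automated keyword search across interview transcripts.
# The "transcripts" directory and PACT_TERMS list are hypothetical placeholders.
import re
from pathlib import Path

PACT_TERMS = ["PACT", "patient aligned care team", "teamlet", "huddle"]  # assumed examples
pattern = re.compile("|".join(re.escape(t) for t in PACT_TERMS), re.IGNORECASE)

def tag_transcript(path: Path) -> list[tuple[int, str]]:
    """Return (line number, line text) pairs that contain any PACT-related term."""
    hits = []
    with path.open(encoding="utf-8") as f:
        for line_no, line in enumerate(f, start=1):
            if pattern.search(line):
                hits.append((line_no, line.strip()))
    return hits

if __name__ == "__main__":
    # Assumed layout: one plain-text transcript per interview in ./transcripts
    for transcript in sorted(Path("transcripts").glob("*.txt")):
        for line_no, text in tag_transcript(transcript):
            print(f"{transcript.name}:{line_no}: {text}")
```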

RESULTS

Our analyses of 48 interviews from 16 sites indicated four primary emergent themes: (1) ownership of clinical performance still rests largely with the provider; (2) the Primary Care Almanac is the most prominent change to clinical performance feedback (aggregation and information dissemination), with decreasing reliance on periodic EPRP reports; (3) existing feedback tools are seen as most effective when monitored by nurse members of PACTs; and (4) facilities report no appreciable changes to assessment of clinical performance since transitioning to PACT.

Provider “Ownership” of Clinical Performance

The PACT model represents a shift from traditional approaches, in which responsibility for clinical outcomes rested primarily with the primary care provider. However, the strongest theme in our data was that, although great efforts have been made to transition to a team-based model of care, feedback about clinical performance is still structured largely around the individual-provider model. For example, in contrast to data on measures related to PACT implementation, which were generally shared with all team members, access to clinical performance information for nonprovider team members depended largely on the provider. Specifically, facility-generated reports of clinical performance data were considered disseminated to a team when provided to the team’s physician:

In my clinic, … we distribute the data to the team, which usually gets handed to the provider…but it stays in my hands only momentarily before my RN takes it to begin getting into the meat of the…the information and identifying who to call and who to arrange for labs for; things like that.

Site J Primary Care Director

Often, interviewees’ language suggested that the provider was considered the owner both of the team (e.g., “my RN”) and its clinical performance outcomes. The idea that providers are the core of the team and that they will “have their own RN, LPN, and clerk to assist” them (Site G Nurse) was a commonly held perspective. A nurse at Site D also noted that, when comparing data, they may look to see if “Dr. A’s patients are getting better compared to Dr. B’s patients and all that,” indicating that the team’s performance was defined in terms of the provider. This language is in contrast to the stated ideals of facility leaders that all PACT members take ownership of the clinical-quality outcomes of their patients:

One of the chief responsibilities of me and my staff is to try and work with people. It doesn’t happen overnight. But to try and change their perspective or their thinking on how they function. First and foremost is team, not as necessarily the physician driving or the provider calling all the shots, but bringing everyone up to the higher level of performance, and it’s a work in progress.

Site M Primary Care Director

In terms of priorities, it is our top priority that that we assess and report clinical performance, um, so that our PACT teams and teamlets can…can know that and continue to improve on their clinical performance.

Site K Primary Care Director

However, there seemed to be divergent views between leadership and clinicians as to who receives, or should receive, feedback. For example, at one site the ACOS described limiting access to certain clinical performance data to providers and preventing providers from seeing one another’s data, with no mention of connecting other primary care team members with the data:

There’s a report also on Veterans Support Service Center (VSSC)…. But you can also get this report from Computerized Patient Records System (CPRS); each provider can pull their own data on what’s called the primary care almanac…They can’t see all the VSSC stuff unless they get access to that database, and I haven’t given them access to that because, like I said before, they will see everybody’s [every provider’s] data. So I get that stuff and I send it to our primary care leaders and they send it out to everybody.

Site D Primary Care Director

At that same site, however, the nurse interviewee reported petitioning his/her facility leadership to gain access to the Almanac, because (s)he saw it as essential to his/her new panel-management role:

We, myself, my supervisor […], kind of petitioned with leadership and said,…the nurses need to be able to access at least limited information in the Almanac. …how are we supposed to manage a high patient cohort that has significant physical and emotional and mental and all problems with all these comorbidities when we can’t even find out who they are until we ask the doctor to run the report? Whereas if I have access to the Almanac, I go in at any time I need.

Site D Nurse

Another example is Site J, where the facility director somewhat hesitantly indicates that the facility targets data to teams, the ACOS notes that the facility targets data to teams by offering it to providers (quoted previously), and the nurse wishes for direct access to data and the technical knowledge to use it effectively:

What I can tell you is the tools that are being built for comparing performance across teams are now shared, so our historical model is we would just engage the provider…and now we’re…sharing that information, uh, on…probably, uh, on the…with the team; not just the provider.

Site J Facility Director

It has been mentioned and I think we signed up to get access [to the Almanac], but that’s all. …and we may have had like a little brief in-service, but, you know, it didn’t translate to anything. …in the ideal world I think this [panel management] would be under my job description; that I would be tracking them, and that they wouldn’t be getting lost; and, you know, I had some great big huge database that I was allowed to do that, and chronic disease management, I guess. …. I may have the tools available to me. I have no idea how to use them.

Site J Nurse

The Primary Care Almanac as a Feedback Tool

Interviewees reported that the Primary Care Almanac was introduced concurrently with PACT implementation. Data in the Almanac can be viewed in aggregated form at multiple levels, including by facility and provider (though not by team or individual team member—some team members serve multiple PACTs). The Almanac can, therefore, be used as one tool for assessing and feeding back information about overall clinical performance. The PACT Compass was also widely referenced; however, it primarily reports on nonclinical indicators beyond the scope of this article.

Attitudes toward the Almanac varied greatly, ranging from perceptions that it is a key tool for achieving clinical improvement,

The most profound change has been the availability of the Almanac. The Almanac is, as you know, the…a way for each…provider to look at his or her own group of patients, if you will, their flock, and to see how everyone’s doing and who specifically is not doing well…and so I think that’s probably the most profound and powerful tool…that we have now at the provider level.

Site N Primary Care Director

to preferences for home-grown tools instead:

The dashboard is similar to the Almanac. It’s a very nice system. You can drill down from the entire region to the VISN [Veterans Integrated Service Network] to the site to the provider. …You can put anything on the dashboard you darn well please, and it comes up in a very nice web-based format…There’s red stop signs and yellow triangles and green diamonds to show you, graphically, your performance, with the statistics associated with those; and for any given provider, you can pull up the data any way you want by a few simple clicks. …The dashboard for us is shared…on our website so our nurse care managers and our clerical staff can get in there, and we have a very coordinated approach…that’s why we really like the dashboard, because it’s a very detailed and effective tool that can be accessed and used by a lot of different people to work on the same goal, and…it updates instantaneously, whereas the EPRP we have to wait quite some time to see our statistics improve.

Site E Physician

Others have abandoned use of the Almanac, citing multiple reasons, such as staffing shortages, the timeliness of its data, and poor alignment of Almanac data with the facility’s goals:

I mean, another problem is when you try to use a tool, and it doesn’t meet your needs. Then I’m sort of…you know, we’re kind of done with it. You know, unless somebody came back and promoted it and said, you know, we’ve made all these great updates to it; now it’s more useful to you.

Site J Physician

At facilities where more robust data-dissemination tools than the Almanac existed prior to PACT, giving nonproviders access to these tools was viewed as key to effective implementation of the PACT model.

Nurses and Clinical Performance Feedback

Physicians noted that they lack time to review clinical performance data with sufficient frequency; to some extent, the PACT nurse care manager role has emerged to fill this gap. Whether because of lack of time or an interest in ensuring appropriate follow-up, many physicians perceived clinical performance feedback tools as most useful when another person was available to monitor and manage the information. Facilities reported some improvements in clinical performance outcomes when feedback was made available to nurse members of the team, particularly in cases where provider data had previously been “in the red.”

What we find is that when the RNs are where we distribute the data to, particularly, we made a lot of in-roads on the hemoglobin A1C parameter, because just identifying who needed to come in and have blood work shifted the numbers significantly and by just having the RNs go through the data, identifying those patients who needed to come in for labs and arranging for them to come in for labs was a very successful intervention.

Site J Primary Care Director

Changes in Clinical Performance Assessment Since Transition to PACT

When asked directly how assessment of clinical performance had changed since transitioning to PACT, interviewees often reported that “indications of quality of care are the same under PACT” (Site C Facility Director). For example, one facility director noted that although new PACT-related performance measures covering chronic disease management, access, and satisfaction had been added, the actual measurement of clinical quality had not changed. In addition, interviewees sometimes interpreted the question as asking about changes to their facility’s actual performance on quality measures, and answers ranged from uncertainty:

with regards to EPRP and clinical-practice outcomes, I’d have to say the jury may be still out in terms of the way that […] the implementation of PACT has made any changes.

Site A Facility Director

to reports of no apparent effect:

the implementation of PACT has not affected our clinical-outcome results.

Site E Facility Director

DISCUSSION

We sought to identify changes in clinical performance A&F to primary care clinical personnel attributable to PACT implementation. Despite the deployment of new reporting tools and leadership’s desire to feed back clinical performance to the entire team, our findings indicate that ownership of and responsibility for clinical performance still rest largely with the provider. Further, though some of these new tools provide features desirable for quality feedback, such as the capacity to individualize and customize,10 access to them is limited to providers and leadership in certain facilities. The premise of the PACT model is that clinical-quality outcomes depend on the actions of all team members, yet facilities’ approaches to clinical performance feedback did not reflect this.

Although many facilities cited a need to increase PACT “ownership” of patient-panel clinical outcomes, current systems of clinical performance feedback (including the Primary Care Almanac) imply provider rather than team ownership of data. Although the concept of delivering data directly to all team members was supported by facilities in principle, we saw little evidence that this was fully implemented at the time of our interviews. Yet our interviewees considered existing clinical performance feedback tools most useful when targeted toward nonprovider team members. One possible explanation for current practice may be an assumption that, if data are delivered to the provider, then, by definition, they have been delivered to the team.

Implications

Our findings suggest a misalignment between operations’ vision of feedback to PACTs and the feedback culture in the clinic, highlighting the need to align clinical-feedback systems with PACT strategic objectives (or those of any Patient-Centered Medical Home [PCMH] outside VA). For example, one explanation for the observation that feedback tools work best when targeted toward nonproviders could be that current measures capture portions of the clinical performance domain that are more effectively influenced by nonproviders than by the providers who have traditionally been recipients of such feedback.

On a more practical level, our findings suggest the need not only to grant individual team members direct access to clinical performance feedback tools, but also to structure the data within these tools around the individual team member, so that he/she can monitor the specific patients for whom he/she is responsible; merely trading a provider’s name for a team designation in clinical performance databases will not increase the likelihood that clinical feedback will reach the most appropriate PACT member’s hands. Modifications to such tools at a national level may be warranted so that data are organized and accessible by the appropriate team member, to better align them with the principles of the PACT/PCMH model (for example, a pharmacist who serves multiple PACTs should be able to see data for each PACT he/she serves, with the capability to disaggregate to the patient level). This is consistent with the approach taken with clinical reminders within VA, where reminders are delivered to the individual responsible for handling the clinical issue in question (e.g., clinical reminders for tasks regularly done by nurses, such as tobacco screening, are received by nurses but not providers). Several questions must be answered for this to be accomplished, including which PACT members play a part in effecting change on each clinical performance measure, and how much interaction is appropriate for each team member to have with such information.
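
As a purely illustrative sketch of the data organization this implies, the example below indexes clinical performance results by team member and PACT and disaggregates them to the patient level. All names, roles, measures, and fields are hypothetical assumptions for the example; this is not an actual VA data model or the structure of the Almanac.

```python
# Hypothetical sketch: feedback data organized around team members rather than providers.
from dataclasses import dataclass, field

@dataclass
class PatientMeasure:
    patient_id: str
    measure: str          # e.g., "HbA1c overdue" (assumed example measure)
    met: bool

@dataclass
class PactPanel:
    pact_id: str
    results: list[PatientMeasure] = field(default_factory=list)

@dataclass
class TeamMember:
    member_id: str
    role: str             # e.g., "pharmacist", "RN care manager"
    pacts: list[PactPanel] = field(default_factory=list)

    def gaps(self, measure: str) -> list[PatientMeasure]:
        """Disaggregate to the patient level: unmet results for one measure
        across every PACT this member serves."""
        return [r for p in self.pacts for r in p.results
                if r.measure == measure and not r.met]

# Example: a pharmacist serving two PACTs sees the care gaps for each panel.
pharmacist = TeamMember("p01", "pharmacist", [
    PactPanel("PACT-A", [PatientMeasure("1001", "HbA1c overdue", False)]),
    PactPanel("PACT-B", [PatientMeasure("2002", "HbA1c overdue", True)]),
])
print([g.patient_id for g in pharmacist.gaps("HbA1c overdue")])  # ['1001']
```

In a structure like this, the team member (including one shared across PACTs) rather than the provider becomes the unit around which care gaps are surfaced.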

Finally, feedback linked to team roles is part of a broader transformation of any clinical team (inside or outside VA), involving shared discussion and planning about how to respond to performance feedback. If, as DeShon and colleagues suggest,20 there are parallel processes for individual and team-based feedback and goals, then simply delivering clinical data to the team (i.e., knowledge of results), even if it has the qualities of actionable feedback,10 is insufficient to ensure improved quality. For the feedback to have maximum impact, the team must reflect on it as a team and plan as a team how to address the quality gaps observed.20 Such reflection could occur, for example, in the daily PACT huddle; however, in our interviews, we did not observe this to be a universal practice.

Limitations

This study had several limitations. First, it consisted of cross-sectional interviews with questions about how performance feedback had changed since PACT, carrying a threat of recall bias and limited ability to detect it. Second, 16 of the originally targeted 64 interviews were not conducted because of declined invitations, ineligibility, and/or a limited pool of potential interviewees for a given role at a given site. Reasons for declining varied considerably; often no reason was explicitly given. However, we saw no appreciable differences in numbers of nonrespondents across roles.

The original study was primarily interested in clinical performance feedback to physicians. Thus, our clinician interviews included only physicians and registered nurses in primary care and excluded other primary care providers, so our data did not include the perspectives of individuals in those roles (e.g., nurse practitioners, physician assistants); however, these providers have their own patient panels and thus receive clinical performance data through the tools mentioned here in the same way as physicians. In addition, although we interviewed nurses, we did not hear from nursing leadership, so the perspectives here are predominantly those of primary care physicians and administrative leadership.

Conclusions/Future Directions

We conclude that although new tools have been created to support higher-quality clinical performance feedback concurrent with the adoption of PACTs, they are not as effective at meeting the feedback needs of clinical teams as they could be, due both to clinic culture dynamics and to specific features of these tools (e.g., limited individualization for team members shared across PACTs). Future research should seek to unpack the nuances of team-based A&F, including issues such as the appropriate distribution of clinical tasks and clinical performance feedback to each member of the team, and the relationship among individual, group, and group-centric clinical performance goals and feedback. Without a system delivering appropriate feedback to all PACT members at both the individual and team levels, it may be difficult to achieve the intended vision of the PACT model of care.