METHODS

R.A.P.I.D. (Root Aggregated Prioritized Information Display) [28] is a circular, single-screen display that represents all data (Fig. 1A). The data are parsed into discrete data categories at fixed locations around the circle, designated by small circles lying on the circumference of the larger circle. Each smaller circle is in fact a "button" that displays the top report in a stack when the cursor hovers over it. When the mouse is then clicked, that report is acknowledged and the next report is displayed. The number in each small circle represents the count of non-critical reports in that data category. Critical reports, as specified by the EHR and potentially by the provider, are counted by a number in a small red circle outside the larger circle, connected by a line to the circle of the corresponding data category with non-critical reports.
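The per-category "stacks" behind the circles can be sketched as paired first-in, first-out queues, one critical and one non-critical per category, with a peek on hover and a poll on click. The published implementation is not shown in the paper, so all class and method names below are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the per-category report stacks behind the display.
// Names are illustrative; this is not the published implementation.
public class CategoryStacks {
    public record Report(String text, boolean critical) {}

    // one FIFO queue of non-critical and one of critical reports per category
    private final Map<String, Deque<Report>> nonCritical = new LinkedHashMap<>();
    private final Map<String, Deque<Report>> critical = new LinkedHashMap<>();

    public void add(String category, Report r) {
        (r.critical() ? critical : nonCritical)
            .computeIfAbsent(category, k -> new ArrayDeque<>())
            .addLast(r);   // newest reports go to the bottom of the queue
    }

    // count shown inside the small circle on the circumference
    public int nonCriticalCount(String category) {
        Deque<Report> q = nonCritical.get(category);
        return q == null ? 0 : q.size();
    }

    // hovering the cursor shows the top of the stack without acknowledging it
    public Report peek(String category, boolean wantCritical) {
        Deque<Report> q = (wantCritical ? critical : nonCritical).get(category);
        return q == null ? null : q.peekFirst();
    }

    // a mouse click acknowledges the top report and reveals the next one
    public Report acknowledge(String category, boolean wantCritical) {
        Deque<Report> q = (wantCritical ? critical : nonCritical).get(category);
        return q == null ? null : q.pollFirst();
    }
}
```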

 

With the new display, the user has a top-down view of the entire stack of patient data, represented as a three-dimensional structure composed of two sets of small, concentrically arrayed cylinders. The new display distributes all reports into the cylinder of the appropriate data category. Any report with a critical result is counted in the outer cylinder array; all non-critical reports are counted in the inner cylinder array. For both critical and non-critical reports, each cylinder is variably filled with reports.

This one-screen display instantly reveals the complete set of critical reports that require urgent review (Fig. 2), located in the outer cylinder array and highlighted in red. Rather than requiring a search across multiple screens, the critical reports can be acknowledged serially, and quickly, each with a single mouse click from a single screen (see the accompanying video [29]).

 

The new display system is a stand-alone software application coded in the Java programming language, chosen for Java's excellent graphic display characteristics. The new display uses a MySQL database to manage login IDs and passwords. As mentioned above, with an appropriate interface, the new display system can represent all data from multiple EHR systems on a single screen and thus enable a timely practitioner response to the EHR system that originated the report.

 

Fig. 1A displays 200 R.A.P.I.D. reports that include both non-critical results (20 in Hematology, 157 in Chemistry, 7 in Coagulation, and 8 in Microbiology) and critical results (3 in Hematology and 5 in Chemistry). In the example in Fig. 1A, the displayed report includes a critically high WBC of 34.9. As the cursor hovers over the circle designating the critical hematology reports, the full report appears on the right side of the screen, and a smaller version of the original new display persists on the left side of the screen. Abnormal, but non-critical, results are highlighted in yellow. To acknowledge a result, the computer cursor is placed over the small circle corresponding to the data category of interest and clicked (or touched on a touch screen). If the circle indicates more than one result, then a second mouse click opens the second report in the stack, and so on. The individual reports are arranged in a queue with the most recent reports located at the bottom of the respective critical or non-critical category. For the purpose of understanding the novel display, the reports can be thought of as being "stacked" in a virtual third dimension and quantified by the number in the relevant circle.

Meditech [Meditech Health Care Co., Westwood, MA 02090] EHR, as configured at the Mount Nittany Medical Center, displays reports in a tabular format and flags critical results in red and abnormal but not critical results in yellow (Fig. 1B). Orchard Laboratory [Orchard Laboratory Information System, Carmel, IN 46032] EHR, as configured at the Mount Nittany Medical Center, also displays reports in a tabular format and flags critical results in red (Fig. 1C).

 

Fig. 2 demonstrates, across the top of the figure, the invariant shape of R.A.P.I.D. regardless of the size of the data files. The "stack" height of the data files increases from 4 reports on the left, to 112 reports in the centre, to 12,979 reports on the right. As mentioned above, the data structure, although displayed as a circle, is actually a three-dimensional structure composed of two concentric arrays of cylinders. Each cylinder contains the reports of a different data category and holds a varying number of reports. Critical reports are represented in the outer cylindrical array in red and non-critical reports in the inner array. The increase in data density going from left to right in Fig. 2 is indicated by the corresponding increase in grey scale.

Critical data are signed off immediately by the appropriate healthcare provider for the single-patient data (left column in Fig. 2); for the entire data set from the provider's practice (centre column in Fig. 2); and for the entire data set of multiple provider practices (right column in Fig. 2). Beyond provider use, the multiple-practice dataset in the new display format may also be useful elsewhere in health care delivery (see Section 5).

 

Fig. 3 shows how the new display, with the same data as in Fig. 2, can help to define workflow to improve practitioner efficiency, resource utilization, and patient safety. Providers sign off critical reports immediately. Non-critical reports can be signed off later by the provider or signed out to other members of the health-care team for sign-off. The critical/non-critical data triage option is useful to any team or user of data in health care. One example is an ICU nurse who monitors patient status with the novel display and triages defined critical results to the responsible ICU physician for action. Another example is a nurse practitioner who uses the novel display to screen large patient populations for compliance with surveillance health screening.

With both adjustable time windows and adjustable critical values, the new display offers any user a one-screen representation of the entirety of the user's universe of data, with data defined as critical, in part, by the user on the fly.

3.1 Study Design

From a data set of 30,797 de-identified, random laboratory results from the Meditech EHR we created a file, or queue, of 100 reports (designated enhanced Meditech or EM). From a similar data set of 9914 de-identified, random laboratory results from Orchard Laboratories we created a second file of 100 reports (designated enhanced Orchard or EO). Each report remained in the format of the originating EHR system (Fig. 1). The same 200 reports were also queued in a file and represented in the novel display format (Fig. 1), with each opened report displayed in the format of its original EHR (Meditech or Orchard). The report queues for acknowledgement in all three systems (EM, EO, and R.A.P.I.D.) had the same structure. The reports were placed in the appropriate categories, i.e., Hematology, Chemistry, Coagulation, or Microbiology. The first click opened the first category (Hematology) and displayed the first report at the top of the queue for acknowledgement. The next click acknowledged the first report and displayed the second report, and so on. After the last report in a category was acknowledged, the next category was opened, and so on, until all reports (100 in EM or EO, 200 in R.A.P.I.D.) had been acknowledged (see Fig. 1). We included 12 physician providers and 30 non-physician providers who volunteered so that we could measure their accuracy and speed in report sign-off. Study participants were asked to assign reports to the categories critical and non-critical (the latter including both normal and abnormal-but-not-critical results). Each participant reviewed 100 reports in EM, 100 in EO, and 200 in R.A.P.I.D. on a common test computer in a timed manner; the outcome was the average time in seconds required to sign off each report with a click. Accuracy was defined as the agreement of a study participant's critical/non-critical assignment for a report with the critical/non-critical assignment by the respective EHR system.
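The two outcome measures described above, agreement with the EHR's assignments and mean sign-off time, reduce to simple per-subject computations. A minimal sketch, with illustrative names (the study's actual scoring code is not published):

```java
// Sketch of the scoring described above: a participant's critical/non-critical
// assignments are compared with the EHR's own assignments for the same
// reports, and sign-off times are averaged. All names are illustrative.
public class SignOffScoring {
    // fraction of reports where the participant's assignment matches the EHR's
    static double accuracy(boolean[] ehrCritical, boolean[] participantCritical) {
        int agree = 0;
        for (int i = 0; i < ehrCritical.length; i++) {
            if (ehrCritical[i] == participantCritical[i]) agree++;
        }
        return (double) agree / ehrCritical.length;
    }

    // average sign-off time per report, in seconds
    static double meanSignOffSeconds(double[] clickTimesSeconds) {
        double total = 0;
        for (double t : clickTimesSeconds) total += t;
        return total / clickTimesSeconds.length;
    }

    public static void main(String[] args) {
        boolean[] ehr = {true, false, false, true};
        boolean[] rated = {true, false, true, true};   // one disagreement
        System.out.println(accuracy(ehr, rated));       // prints 0.75
    }
}
```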

The data categories and data ranges used to designate a result as critical were those of the original EHR system that provided the report, as applied at the Mount Nittany Medical Center in State College, Pennsylvania.

The data were de-identified prior to distribution to the investigators and consisted of laboratory data only. The Mount Nittany Medical Center Institutional Review Board determined that the study of de-identified data did not require Institutional Review Board approval. Test subjects were recruited and agreed to participate. Their survey responses were also de-identified as to test subject and are stored on two secure computers with no other dissemination.

Initially, a group of 12 physician providers signed off the three report groups. The physicians used the R.A.P.I.D. system as normally configured, with only an acknowledgement function for the review of reports; the error rate for the physician provider group with R.A.P.I.D. was therefore zero. Reports in R.A.P.I.D. with critical results are queued for review before reports containing only non-critical results. Both the EM and EO systems were provided with a critical and a non-critical button so that the test participant could change the critical/non-critical designation of a report.

 

For the purposes of this study only, R.A.P.I.D. was then altered to allow a second group of study participants to reassign system-designated critical/non-critical results. The new display's acknowledgement function was therefore replaced with critical and non-critical buttons for sign-off. Then 30 nurses, nurse practitioners, and physician assistants (non-physician providers) signed off the same set of reports in each of the three systems using the critical and non-critical buttons.

The order in which each subject reviewed the three systems was randomized using a block randomization scheme (Supplementary Materials Appendix Table 1 [30]). This constrained randomization balances the crossover design to ensure a fair comparison among the three methods: each system was used an equal number of times as the first, second, and third system reviewed by a test subject.
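A block randomization of this kind can be sketched by cycling through blocks that each contain all six possible orders of the three systems, shuffled within the block, so every system appears equally often in every position. The study's actual scheme is the one tabulated in Supplementary Table 1 [30]; the code below is illustrative only:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Illustrative block randomization over the three systems (EM, EO, R.A.P.I.D.).
// Subjects are assigned in blocks of six, one block per full set of the six
// possible orders, so each system appears equally often in each position.
public class BlockRandomization {
    static final String[][] ORDERS = {
        {"EM", "EO", "RAPID"}, {"EM", "RAPID", "EO"},
        {"EO", "EM", "RAPID"}, {"EO", "RAPID", "EM"},
        {"RAPID", "EM", "EO"}, {"RAPID", "EO", "EM"},
    };

    // returns one system order per subject; nSubjects should be a multiple of 6
    static List<String[]> assign(int nSubjects, Random rng) {
        List<String[]> assignments = new ArrayList<>();
        while (assignments.size() < nSubjects) {
            List<String[]> block = new ArrayList<>(List.of(ORDERS));
            Collections.shuffle(block, rng);   // shuffle within each block of six
            for (String[] order : block) {
                if (assignments.size() < nSubjects) assignments.add(order);
            }
        }
        return assignments;
    }
}
```

With 12 subjects (two complete blocks), each system is guaranteed to appear first for exactly four subjects, whatever the shuffle produces.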

The results of the 12 physician providers and 30 non-physician providers were analysed by fitting a three-way, mixed-effects analysis-of-variance model that compared the mean time to review the laboratory data, measured in milliseconds. The model was fitted to the logarithm of sign-off time, accounting for the crossover experimental design, with the rater (physician provider or non-physician provider) as a random effect, and provided tests for possible system and/or order effects.
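The paper does not give the explicit model formula, but a mixed-effects model of this kind can be written, in one illustrative parameterization, as

```latex
\log t_{ijk} = \mu + \alpha_i + \beta_j + r_k + \varepsilon_{ijk},
\qquad r_k \sim N(0,\sigma_r^2), \quad \varepsilon_{ijk} \sim N(0,\sigma^2),
```

where \(t_{ijk}\) is the sign-off time for rater \(k\) using system \(i\) (EM, EO, or R.A.P.I.D.) in order position \(j\) (first, second, or third), \(\alpha_i\) and \(\beta_j\) are the fixed system and order effects, and \(r_k\) is the random rater effect. Tests of \(\alpha_i = 0\) and \(\beta_j = 0\) correspond to the system and order effects mentioned above.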

To determine provider accuracy, the results for each test subject were scored for the frequency of reassignments of EHR-system-defined non-critical results to critical, and of critical results to non-critical, separately for each of the three EHR formats. The frequency of reassignments among the three systems was compared using the chi-square test for equal proportions.
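The chi-square test for equal proportions reduces to a Pearson statistic on the observed reassignment counts, pooled under the null hypothesis that all three systems share the same reassignment rate. A sketch with hypothetical counts (the study's actual reassignment frequencies are not reproduced here):

```java
// Chi-square statistic for equal proportions across k systems.
// The counts used in main() are hypothetical, for illustration only.
public class ChiSquareEqualProportions {
    // reassigned[i] = number of reassigned reports in system i;
    // total[i] = number of reports reviewed in system i
    static double statistic(int[] reassigned, int[] total) {
        double sumX = 0, sumN = 0;
        for (int i = 0; i < reassigned.length; i++) {
            sumX += reassigned[i];
            sumN += total[i];
        }
        double p = sumX / sumN;   // pooled reassignment proportion under H0
        double chi2 = 0;
        for (int i = 0; i < reassigned.length; i++) {
            double expected = total[i] * p;   // expected reassignments under H0
            // Pearson contributions from the "reassigned" and "not reassigned" cells
            chi2 += Math.pow(reassigned[i] - expected, 2) / expected
                  + Math.pow((total[i] - reassigned[i]) - (total[i] - expected), 2)
                    / (total[i] - expected);
        }
        return chi2;   // compare against a chi-square distribution with k-1 df
    }

    public static void main(String[] args) {
        // e.g. 12, 9, and 4 reassignments out of 100 EM, 100 EO, 200 R.A.P.I.D. reports
        System.out.printf("chi2 = %.3f%n",
            statistic(new int[]{12, 9, 4}, new int[]{100, 100, 200}));
    }
}
```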

Fig. 1. Panel A shows a R.A.P.I.D. image of 200 unacknowledged reports, limited to laboratory results, for this survey. The reports are first divided into 11 categories arrayed at fixed locations around a circle. Within each category the reports are then partitioned into critical reports, enumerated in a small circle lying outside the larger circle and connected by a line to a small circle on the circumference with the partitioned, corresponding non-critical reports. Placement of the cursor over the small circle with the number "3" (critical hematology) displays the first report ("top of the stack") on the right. The report remains at the top of the stack or queue until a click signs off the report and the next report in that data category queue appears on the right of the screen for sign-off. In this survey, the data delivered to R.A.P.I.D. consist exclusively of laboratory data. For part of this survey, critical and non-critical buttons have been added for sign-off. Panel B shows a Meditech hematology result for this survey with critical values flagged in red and abnormal but non-critical results flagged in yellow. Panel C shows an Orchard chemistry result for this survey with critical values flagged in red.

[Fig. 1: Panel A, R.A.P.I.D. for survey; Panel B, Meditech display for survey; Panel C, Orchard display for survey]

[Fig. 2]

Fig. 2. The constant R.A.P.I.D. images at the top of the columns represent all the information within the underlying columns. See Fig. 1A for the constant data category assignments of each of the three R.A.P.I.D. images. The display of 4 reports for a single individual is in the left column. The display of 112 reports for a single practice is in the centre column. The display of 12,979 reports from multiple practices is in the right column. The increasing height and grey scale of the cylinders reflect the increasing data density going from left to right. The reports are understood to be in a virtual "stack."

[Fig. 3]

Fig. 3. The new display can improve workflow in medical data management. Critical reports are acknowledged immediately by the responsible practitioner or provider. Non-critical reports are acknowledged routinely by the provider or triaged to other members of the health-care team.

© 2018 by Triaged Data Display, LLC
