Zhiqiang (Eric) Zheng's research interests include:
Business Analytics (Theories, Methods and Applications)
Data Mining Methods (DEA for outlier detection, state-space imputation, generative text mining)
Healthcare Analytics (hospital capacity optimization, patient lifetime expense, clinic waste reduction)
Social Media Analytics (text analytics, signed social network analysis, crowdsourcing)
Financial Analytics (high-frequency trading, streaming data analytics)
Information Technology Innovation, Diffusion and Standardization
Quantitative Analysis for Operations, Marketing and Finance (inventory optimization under subscription-based services, sponsored search auctions, social gaming product diffusion, computational econometric analysis for strategic traders)
Recent healthcare reform has focused on reducing excessive waste in the U.S. healthcare system, with duplicate testing being one of the main culprits. We explore the factors associated with duplicate tests when patients utilize healthcare services from multiple providers, yet information sharing across these providers is fragmented. We hypothesize that the implementation of health information sharing technologies will reduce the duplication rate more for radiology tests than for laboratory tests, especially when these technologies are implemented across disparate provider organizations. We utilize a unique panel data set of 39,600 patient visits from 2005 to 2012, across outpatient clinics of 68 hospitals, to test our hypotheses. We apply a quasi-experimental approach to investigate the impact of health information sharing technologies on the duplicate testing rate. Our results indicate that the use of information sharing technologies across health organizations is associated with lower duplication rates, and that the reduction in duplicate tests is more pronounced among radiology tests than among laboratory tests. Our results support the implementation of health information exchanges as a potential solution to reduce the incidence of duplicate tests.
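A common way to operationalize such a quasi-experiment is a difference-in-differences comparison: the change in the duplicate-test rate among adopters of information sharing technology, net of the change among non-adopters. The sketch below illustrates that logic on synthetic data; the effect size, baseline rates, and simple two-period setup are illustrative assumptions, not the paper's actual specification or estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n_hosp = 68                 # matches the 68 hospitals in the panel
true_effect = -0.06         # hypothetical: adoption lowers the duplication rate by 6 points

treated = rng.random(n_hosp) < 0.5             # hospitals adopting cross-organization sharing
base = rng.normal(0.20, 0.03, n_hosp)          # hospital-specific baseline duplicate-test rate
trend = 0.01                                   # common time trend affecting all hospitals

pre = base + rng.normal(0, 0.01, n_hosp)       # pre-adoption duplication rate
post = base + trend + true_effect * treated + rng.normal(0, 0.01, n_hosp)

# Difference-in-differences: adopters' change minus non-adopters' change.
# The shared baseline and common trend cancel out, isolating the treatment effect.
did = (post[treated] - pre[treated]).mean() - (post[~treated] - pre[~treated]).mean()
print(f"estimated effect on duplication rate: {did:.3f}")
```

In practice the estimate would come from a panel regression with fixed effects and controls, but the differencing intuition is the same.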
Zheng, Zhiqiang; Pavlou, Paul A.; Gu, Bin
This paper presents and extends Latent Growth Modeling (LGM) as a complementary method for analyzing longitudinal data, modeling the process of change over time, testing time-centric hypotheses, and building longitudinal theories. We first describe the basic tenets of LGM and offer guidelines for applying it to Information Systems (IS) research, specifically how to pose research questions that focus on change over time and how to implement LGM models to test time-centric hypotheses. Second, and more importantly, we theoretically extend LGM by proposing a model validation criterion, "d-separation," to evaluate why and when LGM works and to test its fundamental properties and assumptions. Our d-separation criterion does not rely on any distributional assumptions about the data; it is grounded in the theory of conditional independence. Third, we conduct extensive simulations to examine a multitude of factors that affect LGM performance. Finally, as a practical application, we apply LGM to model the relationship between word-of-mouth communication (online product reviews) and book sales over time, using 26 weeks of longitudinal data from Amazon. The paper concludes by discussing the implications of LGM for helping IS researchers develop and test longitudinal theories.
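The core idea of LGM is that each unit follows its own latent growth trajectory, described by unit-specific intercept and slope factors whose population distribution is the object of interest. A minimal sketch of that random-coefficients view on synthetic data (a full LGM would be estimated with structural equation modeling software; the panel sizes and growth parameters here are illustrative assumptions, not the paper's Amazon estimates):

```python
import numpy as np

rng = np.random.default_rng(7)
n_units, n_weeks = 200, 26                     # hypothetical panel with 26 weekly observations
t = np.arange(n_weeks)

# Each unit has its own latent intercept and growth slope (the LGM growth factors)
intercepts = rng.normal(5.0, 1.0, n_units)
slopes = rng.normal(0.30, 0.10, n_units)
y = intercepts[:, None] + slopes[:, None] * t + rng.normal(0, 0.5, (n_units, n_weeks))

# Two-step estimate of the growth factors: fit a line to each unit's trajectory,
# then summarize the distribution of the individual intercepts and slopes.
X = np.column_stack([np.ones(n_weeks), t])     # design matrix: [1, time]
coefs = np.linalg.lstsq(X, y.T, rcond=None)[0] # shape (2, n_units): per-unit fits
mean_intercept, mean_slope = coefs.mean(axis=1)
print(f"mean intercept: {mean_intercept:.2f}, mean growth rate: {mean_slope:.2f}")
```

Testing whether a covariate (such as review volume) predicts the latent slope factor is then what makes a hypothesis "time-centric" in the LGM sense.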