
Monday, Nov 2, 2020

Econometric Techniques Applied to Orthopaedic Datasets

It’s fun when interests collide. In a recent podcast interview with Davida Dinerman[1], we reviewed how my academic and career experience has led me to where I am today, with an emphasis on developing and disseminating information for decision support in healthcare. We covered a lot of ground in my non-linear career path, but Davida perked up when she heard me mention the term econometrics, which was new to her.

Then, this past weekend, I listened to the Orthopod podcast[2] on the topic of using real-world evidence (RWE), in this case, a Dutch registry that includes over 400,000 orthopaedic patients. Dr. Mohit Bhandari (@orthoevidence), the host, talks with Dr. Rudolf Poolman (@rudolfpoolman), who describes how he was able to question a guideline that calls for an age cut-off for cemented vs. non-cemented hip hemi-arthroplasty, using a research technique borrowed from econometrics.
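The episode doesn’t spell out the technique in this post, but a standard econometric design for questioning an age cut-off with registry data is regression discontinuity: compare outcomes just below and just above the threshold, and any jump beyond the smooth age trend is attributed to the guideline-driven change in treatment. A minimal sketch on simulated, entirely hypothetical data (the cut-off age, outcome score, and effect size are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical registry: patient age and a recovery score.
# The guideline switches treatment at the age cut-off.
CUTOFF = 80
age = rng.uniform(60, 100, 5000)
treated = age >= CUTOFF
# Smooth age trend plus a simulated 2-point jump at the cut-off.
score = 90 - 0.4 * (age - 60) + 2.0 * treated + rng.normal(0, 3, age.size)

# Sharp RDD: local linear fits on each side, within a bandwidth.
bw = 5
left = (age >= CUTOFF - bw) & (age < CUTOFF)
right = (age >= CUTOFF) & (age <= CUTOFF + bw)

def fit_at_cutoff(x, y):
    """Fit y ~ x by least squares and evaluate the line at the cut-off."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope * CUTOFF + intercept

effect = fit_at_cutoff(age[right], score[right]) - fit_at_cutoff(age[left], score[left])
print(f"estimated jump at age {CUTOFF}: {effect:.2f}")
```

If the estimated jump is indistinguishable from zero in real registry data, the age cut-off in the guideline is open to question; if a clear discontinuity appears, the threshold is doing real work.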

Both podcasts cover other topics, including empowering patients by including them in research design (Orthopod) and providing access to medical information and data (my interview with Davida), as well as shared decision-making and clinical decision support/guidelines. I encourage you to add both episodes—and series—to your podcast list.

And, I thank Davida for asking me to describe my background, so that I can now point people to the LookLeftforGrowth podcast episode if they want to know how I ended up with my unusual mix of technical, analytic, behavioral economics and market research skills.

I haven’t made direct use of econometrics in my work since I left business school—a long time ago. Nonetheless, my training in econometrics, economics, statistics, mathematics, and French(?) has given me a foundation in, and perspective on, big data and analytic methods that I rely on frequently when envisioning and assessing new research methods in medical and life science research. Listening to the referenced episode of the Orthopod podcast, I felt a sense of satisfaction that my stack of skills[3] has value in today’s big data-enabled, evidence-based medical research environment.

 


[1] https://www.lookleftforgrowth.com/podcast/episode/487d00f0/janice-mccallum-on-the-promise-and-challenges-of-healthcare-data. Relevant discussion occurs between 3 min 15 seconds and 5 min 40 seconds.

[2] https://myorthoevidence.com/Podcast/Show/84? Relevant discussion starts at 14 min 9 seconds.

[3] https://www.theladders.com/career-advice/skill-stacking-instead-of-mastering-one-skill-build-a-skill-set

Thursday, Jan 15, 2015

Springer Science+Business Merges with Holtzbrinck’s Macmillan Science Group

One could get dizzy trying to trace the M&A history of Springer Science+Business Media. I recall analyzing the company’s likely future in the mid-2000s, when it was owned by two private equity firms, Cinven and Candover, which had formed Springer Science+Business Media in 2004 by merging Kluwer Academic Publishers with Springer. In late 2009, Cinven and Candover sold the company to EQT, a Swedish private equity firm, for 2.3 B EUR (note: Springer carried 2.2 B EUR of debt at the time).

In 2013, EQT sold Springer to BC Partners for 3.3 B EUR.

Today, it was announced that BC Partners will merge Springer with Holtzbrinck Publishing Group’s Macmillan Science Group (well, almost all of MSG). Holtzbrinck becomes the majority investor with a 53% stake; BC Partners retains a minority stake. Derk Haank, the CEO of Springer, will become CEO of the combined company. Annette Thomas, current CEO of Macmillan, will serve as Chief Scientific Officer.

The combined entity will have 13,000 employees and annual sales of 1.5 B EUR ($1.7 B US).

The rationale for the merger centered on the need for market share in the scholarly publishing segment. Derk Haank is quoted as saying, “Together, we will be able to offer authors and contributors more publishing opportunities and institutional libraries and individual buyers will have more choice. The expected economies of scale will allow for additional investments in new product development.” 

Scale and market share are becoming increasingly important as large publishers, including Elsevier, Wolters Kluwer, and Wiley, along with Springer, compete to acquire titles from scholarly societies and other small publishers. Control over high-quality scientific and medical journal content allows the big publishers to reinforce the value of bundled access to their collective publications (as well as subsets of their publications). But perhaps more important, by accumulating rights to a significant share of the journals that serve as the arbiters of quality research—and, by association, quality scientific and medical evidence—these leading scientific, technical and medical (STM) publishers will remain essential to any analytics engine that aims to mine the universe of important research results.

In brief, scale affords digital information businesses far more options in their choice of business models than would be available to a small publisher. Springer Science+Business Media now has Nature and its portfolio of journals in its camp. Together, they should be able to better compete in a segment that will continue to consolidate.

Thursday, May 30, 2013

Health Data Meaningful Updates

I’ve been so busy with guest posts and speaking engagements in the past couple of months that I’ve neglected updating my own site. I’ll try to rectify that now with condensed versions of some recent activity below.

I. NaviNet Expert Interview Series, March 2013

Laura McCaughey and I discuss big data, population health, health IT, shared decision-making, the Affordable Care Act, and medical cost trends, all in under two pages! A few excerpts:

 

Laura: What do you see as the biggest developments in HIT in the next year?

Janice: “I think the biggest developments will occur as provider organizations build upon the population health analysis that got its start with the foundation laid by the Meaningful Use framework. In particular, we’ll see more analyses of treatment plans, costs, and outcomes by segments of patients. The segmentation possibilities are almost endless. When combined with genomic data and other nontraditional types of data, they will bring us a long way toward the goal of personalized medicine.”

Laura: There’s been so much talk around big data for a variety of industries, but what does it mean for the healthcare industry?

Janice: “…to benefit from many of the existing Big Data technologies and modeling that are being used in retail, financial services, and other industries, the health care industry needs to improve the amount of collaboration at the level of sharing data sets and sharing results from previous analyses. Obviously, there are some limitations on how patient registries can be shared, but there is good progress in creating large research datasets that include de-identified patient data. In fact, the Agency for Healthcare Research and Quality (AHRQ) recently released a Registry of Patient Registries (RoPR).”

Laura: Last year you identified the accountable care organization (ACO) model as one of the major factors to shape care collaboration. How much of that has happened, and how much further do we need to go? 

Janice: “I think most ACOs have just scratched the surface in establishing a new model of providing care and involving patients in decisions about their care. It will take some time for the culture of physician-patient communication to change. Furthermore, the tools that have been available to educate and support clinicians and patients haven’t kept up with the organizational changes. In particular, patient education/patient information tools and materials are sorely lacking for patients who want to take a more active role in their health and medical care. I cringe every time I hear that patient education materials have to be prepared to meet the reading level of the “lowest common denominator” in the spectrum of patients. While I understand that some public health messages must be understandable to a very broad spectrum of the population, the same rationale doesn’t apply to all information made available to patients.”

Laura: What are some of the key components that a HIT platform needs in order to be successful in today’s changing healthcare landscape? 

Janice: “ACOs and the so-called patient-centered medical home (PCMH) concept should put a high priority on configuring their systems so that patients can both contribute information to and download information from their records. This way, patients can act as their own up-to-date “mobile record.” Not all patients are ready to take on this role, but that’s not a good reason to prevent those patients who are ready from improving access to information that can improve the quality of care they receive and possibly reduce the cost. The early innovators among the patient populations who actively track, update, and analyze their personal health records can serve as models for the “laggards” who will wait until the benefits become more obvious and the tools become easier to use.”

Laura: The Affordable Care Act (ACA) will soon be implemented, and millions of newly insured Americans will be receiving care that they did not receive previously. How are payers planning to handle this?

Janice: “Apart from having designed new plans that are ready to be promoted and sold on health insurance exchanges (now called health insurance marketplaces), I can only make an educated guess on how payers are planning to handle the new populations of patients who will be insured as a result of the ACA. Note, I have a different view on how much the newly insured will increase the demand for medical services, compared with the conventional wisdom, which estimates that the previously uninsured will flood primary care physicians with pent-up need for medical care. I agree that physician practices will enroll many new patients in areas where there had been a large number of uninsured. However, I think that a large number of the newly insured patients will have so much experience managing their own care that they won’t overburden the provider organizations as much as some analysts predict. Plus, many of the insurance plans available to these populations will include significant co-pays (significant is in the eye of the beholder in this case!). With high co-pays, I predict that populations that were unable to afford insurance coverage in the past will not be able to afford most co-pays and will find ways to reduce their costs of care whenever possible by using retail clinics and other lower-cost options, such as telehealth.”

Laura: The medical cost trend has slowed considerably in the past few years. What can providers and payers do to help keep costs from rising?

Janice: “The best advice I have to keep costs from rising is to provide more information about costs to patients before they choose a course of treatment. Providing more information about the likely benefits and risks of treatment plan options under consideration to patients will also increase the patient’s level of commitment to the chosen treatment plan. Moving to this “shared decision-making” model will likely reduce costs in the short term, although that’s not a sure bet, since cost is not the only criterion that patients will consider.

“As I consider the topics we’ve just discussed, it occurs to me that the most significant move payers could make to slow the rise in costs would be to simplify health insurance plans so that costs are far more transparent. Some payers are ahead of others in offering data on costs. For instance, Aetna offers an average estimated cost by region in its Aetna Navigator tool. Although not complete, Aetna’s move in the direction of providing cost information is a step in the right direction.”

See full interview at: http://www.navinet.net/blog/2013-navinet-expert-interview-series-janice-mccallum-health-content-advisors


II. Making Health Data Healthier: How to Determine What’s Valuable and How to Use It

A dialogue between Geeta Nayyar, MD, MBA, Chief Medical Information Officer at AT&T, and me about managing and leveraging health data for the benefit of providers and patients.

On the topic of Meaningful Use:

Geeta: In your opinion, how does Meaningful Use help advance the value of data in medical research and clinical applications?

Janice:  “The Meaningful Use incentive program has jump-started the adoption of electronic health records and set the framework for coordinating a fragmented group of providers, health IT vendors, and analytics companies. The common sets of data to be collected, tracked, and analyzed set the stage for greater collaboration between providers/clinicians, payer organizations, medical researchers and patients.

...

Frankly, I wish the value of data standards and collaboration were so obvious that providers and payers would develop industry standards without external pressure. Since that wasn’t the case prior to the Meaningful Use program, I would say that we’ve seen great strides in enhancing the value of data available for medical and clinical applications in a short period of time.”

 

See full interview at:

http://networkingexchangeblog.att.com/enterprise-business/making-health-data-healthier/.

 

III. Webinar on Meaningful Use for Medical Librarians

I recently gave an hour-long webinar, Meaningful Use: A Means to an End, to the National Network of Libraries of Medicine (NN/LM), New England Region. Along with providing some context for the Meaningful Use program, the webinar focused on roles for medical librarians in implementing Meaningful Use programs, especially elements that relate to patient engagement, quality measures, and clinical decision support.

Please contact me (janice@healthcontentadvisors.com) if you are interested in a customized version of the webinar/presentation for another audience.

Tuesday, Feb 12, 2013

Big Data: Helping Scholarly Publishers Cut Through the Hype

Last week, I had the privilege of participating in an executive panel at the PSP Annual Conference in Washington, DC. Topics included semantic technology, mobile technology and models, big data, and user analytics.

My presentation on Big Data was added to SlideShare on Saturday and started trending on LinkedIn and Twitter almost immediately. Clearly, Big Data is a hot topic! Furthermore, gaining a solid understanding of what Big Data means, and what it means to your business, is important to publishers of all sorts. This presentation was customized for scholarly publishers, including the top medical publishers, and provides some recommendations for participating in the high-growth and quickly changing Big Data field.

Many thanks to Audrey Melkin from Atypon for inviting me to speak and for moderating the panel. I look forward to your feedback on the presentation and on continuing the conversation about developments in the field of Big Data and healthcare analytics.

Tuesday, Oct 16, 2012

Data Drive Efficient Market Transactions

Much of what we do in our business lives can be reduced to a market model. We buy, sell, arrange meetings, seek funding, invest, hire, travel, …. You get the idea. All of these activities require connecting parties to each other and, if done appropriately, they result in making the best match between parties.

In many transactions, the price of a good or service is the key variable on which to make a match. In others, price may not be a major factor at all. In fact, the two economists who just won the Nobel Prize in Economics, Alvin Roth and Lloyd Shapley, specialize in designing matching markets in areas such as organ donation or matching medical residents with hospitals, where price is not the central variable to match.
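The prize-winning line of work builds on the Gale–Shapley deferred acceptance algorithm, which Roth applied to real matches such as the medical residency program: participants propose down their preference lists, and the other side tentatively accepts the best proposal seen so far. A minimal one-seat-per-hospital sketch with hypothetical names and preference lists (real matches handle multi-seat quotas and couples, which this omits):

```python
def deferred_acceptance(resident_prefs, hospital_prefs):
    """Resident-proposing Gale-Shapley deferred acceptance.

    Each prefs dict maps a name to an ordered list of acceptable partners.
    Returns a stable matching as {resident: hospital}, one seat per hospital.
    """
    # Rank lookup so hospitals can compare proposers quickly.
    rank = {h: {r: i for i, r in enumerate(prefs)}
            for h, prefs in hospital_prefs.items()}
    free = list(resident_prefs)            # residents still proposing
    next_choice = {r: 0 for r in resident_prefs}
    match = {}                             # hospital -> tentatively held resident

    while free:
        r = free.pop()
        if next_choice[r] >= len(resident_prefs[r]):
            continue                       # r has exhausted their list; unmatched
        h = resident_prefs[r][next_choice[r]]
        next_choice[r] += 1
        current = match.get(h)
        if current is None:
            match[h] = r                   # seat was empty: tentatively accept
        elif rank[h][r] < rank[h][current]:
            match[h] = r                   # hospital prefers the new proposer
            free.append(current)           # displaced resident proposes again
        else:
            free.append(r)                 # rejected: r tries the next hospital

    return {res: hosp for hosp, res in match.items()}

# Hypothetical example: three residents, three one-seat hospitals.
residents = {"ann": ["mercy", "city", "general"],
             "bob": ["city", "mercy", "general"],
             "cal": ["city", "general", "mercy"]}
hospitals = {"mercy": ["bob", "ann", "cal"],
             "city": ["ann", "bob", "cal"],
             "general": ["cal", "ann", "bob"]}
print(deferred_acceptance(residents, hospitals))
```

The key property is stability: no resident and hospital would both prefer each other over their assigned match, which is exactly what makes a priceless matching market work.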

Making Markets was the topic of one of the sessions I moderated last week at the Data Content 2012 conference. Data Content has been at the forefront of data publishing advancements over its celebrated 20-year history. In the Making Markets session, we focused on the most common and well-understood type of B2B transaction: connecting buyers and sellers.

The three companies represented on the Making Markets panel help connect buyers and sellers in specialty markets: CapLinked in the investment sector, by bringing together potential funders and companies seeking funding; The Gordian Group in the construction sector, by matching contractors to job order contracts; and Fabricating.com in the custom manufacturing segment, by matching industrial companies with manufacturers of specialty parts. The speakers emphasized how operating in the “neighborhood” of the transaction creates an opportunity to collect transaction-related data, which in turn add more value to the match-making process—creating a virtuous circle.

Other sessions at Data Content touched on how data collection and data management are a differentiating factor only when hard work is put in to pull together hard-to-aggregate data or clean messy data. If it’s too easy to compile the data, you won’t have a defensible resource. But that’s perhaps the most distinguishing benefit of becoming a market-maker: the transactional data generated by the match-making process become a unique data asset that cannot be replicated. These secondary data can be organized and used for industry benchmarks and can be fed back into the matching algorithms to build a continuous improvement loop.

As pointed out by my colleague Russell Perkins in his closing presentation at Data Content, in the era of Big Data, data produced by specialty publishers may just be the special ingredient that helps “solve the ‘last mile’ problem to make Big Data actionable.” In particular, the trusted and verified contact information supplied by publishers can help make the final connection between buyers and sellers. In the Making Markets session, we saw ample evidence that structured data supplied by B2B data publishers can be put to use to drive efficient transactions throughout the match-making process.