Ethical Issues in Mobile Health (mHealth) – What is ‘good’ enough?

With around 5 billion mobile phone users globally, opportunities for mobile technologies to play a formal role in health services, particularly in low- and middle-income countries, are undeniable. The use of wearable and deployable sensors, portable electronic devices and software applications (‘apps’) to provide interventions and therapies, health services, and to manage personal and patient information has exploded into all of our lives. Smart phone ‘apps’ are proliferating beyond games and information gathering.
- How ‘good’ are these technologies?
- Are they safe? Functional? Privacy-protected?
- Who is monitoring them?
- What is ‘good enough’?
- Do all players/stakeholders define ‘good enough’ in the same way?
This blog is here to kick off the discussion – chime in using the ‘leave a reply’ link provided at the end of the blog entry.

Mobile Health (mHealth) offers elegant solutions to problems that health care – both its prevention and treatment arms – has faced since its inception: how to get the right information where and when it is needed, how to conquer the obstacles presented by time (who has any?) and geography, and how to reach remote populations and help health workers in the field. mHealth offers the promise (or maybe we should say, the hope) of persistent, pervasive healthcare services that can be delivered anytime, anywhere. It also offers a new arsenal of potentially effective ways to intervene, manage, and treat in health care – ways that could actually lead to behavior change and improved health outcomes. With the rapid increase in smart phone ownership, particularly in low-income populations, phone-based interventions have brought positive results in many groups, including low socioeconomic status and minority populations. Technology has become a part of our everyday lives – and it has high potential for improving health by assisting with (remote) clinician-patient communications, (self) monitoring, behavior modification, and disease self-management. This new connectivity and fast-paced technology development carry a number of ethical implications. I would like to focus on three, one blog at a time. These are:

  1. How do we know which apps and technologies are ‘good’ (i.e. collect data reliably and validly, provide data feedback that is reliable and valid, and provide adequate protection of your privacy)?
  2. Who should own & control the data that your apps are collecting? Who cares about privacy, are the regulations too cumbersome to even be considered in full? What are we willing to share in order to get valued ‘goods’ – like a fast, personalized search?
  3. mHealth technologies collect and store vast amounts of data. What ARE we going to do with all that data? And what is the availability of all that data for an unspecified amount of time doing to us? What does it mean to live in a world where our past, medical and otherwise, is etched like a tattoo into our digital skins?

This is the blog about #1: How do we know which ‘apps’ are ‘good’ – and what is good enough? Happtique is a company that has taken it upon itself to “curate all of the medical and healthcare apps under the sun.” That’s a pretty tall order. But there is no question that there is a need for some kind of quality control. The proliferation of apps and technologies that could impact people’s health has spilled over into the consumer world without very much oversight. What if a technology doesn’t do what it says it will do, or doesn’t measure what it claims to measure? What if it fails while monitoring your heart rate, or digitally transfers your information through insecure networks and portals? And where do we draw the line – which seems pretty fuzzy – between health, medicine, and play? Do apps like “Affirmations for a Stress Free Life” qualify as mHealth? These are questions that, arguably, should be the terrain of careful research – the kind that any other medical/health device has been required to undergo in the past. But mHealth technologies are moving at lightning speed. The 5-year, or even 2-year, research cycle isn’t fast enough to keep up with the developments. By the time you have ‘tested’ your technology, it is obsolete. And note that Happtique doesn’t offer to test these technologies. Very cleverly, it has aimed to curate and certify them – not the same as validating or testing them – and is hammering out a process that seems to combine expert panels and crowdsourcing. iMedicalApps offers a similar service. Here, physicians, health professionals, and mHealth analysts provide reviews, some research, and experientially based commentary on mobile technologies, with the possible advantage that iMedicalApps doesn’t make or sell apps itself. Both approaches are more nimble than, say, the Federal Trade Commission or the Food and Drug Administration.
There are interesting discussions on the pros and cons of FDA regulation of mHealth, for instance by Bradley Merrill Thompson and David L. Scher.

So Happtique is stepping up to the plate in an important situation where it might not be ideal for a for-profit corporation to take the lead. Why not? Partially because they are a ‘for-profit’. Happtique also sells mHealth apps – including “Affirmations for a Stress Free Life”, by the way. So the conflict-of-interest issue looms large, although, like I said up front, the need for some kind of quality control is obvious, and the angle that Happtique has taken is creative. However, possibly life-saving technologies and apps that have not undergone rigorous, evidence-based development and testing pose tough problems. What we have here is an area where government agencies, funders, researchers, and industry need to find a way to come to the table together. Agile software development forms a triumvirate of developers, businesspeople, and customers, leaving science and regulation out of the picture. We need fresh ideas on how to tackle the tension between the glacial progress of science and the lightning speed of technological development. I am hoping some of you readers will post them here!

The DSM V is arriving soon: Whatever happened to ‘normal’?

Welcome to my blog! My name is Donna Spruijt-Metz, and I am going to be blogging about [research & medical] ethics, because I want to create a community of engaged people who will speak in plain language to each other about these difficult and important topics that touch all of our lives. So please – get involved! I hope you read and comment, and follow the discussion and comment again! Here goes blog #1:

The American Psychiatric Association (APA) will publish the much-anticipated revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM V) in May of 2013. The classification, diagnosis, and treatment of psychiatric disorders as laid out in the DSM has enormous public impact. The influence of DSM classifications reaches into courts of law, insurance claims, and, perhaps most importantly, the doctor’s office. DSM classifications are used to decide who is considered to have a mental disorder, when a person is eligible for particular prescriptions, who qualifies for insurance reimbursements or disability – the list goes on. This influence is not purely domestic: as Cosgrove and Krimsky explain in their article, the DSM’s influence is broad and global. So should we be excited by this new release? I have two overarching concerns:

Concern #1: Medicalization – increasingly, there is medicine for everything. And it is human, oh so human, to sincerely welcome that; we all want the magic bullet, because behavior change is devilishly difficult. But the new DSM proposes a number of ‘simplifications’ that could have eerie outcomes. For instance, it proposes to reduce the number of criteria necessary for a diagnosis of Attention Deficit Disorder (ADD). These days, one could argue that a child who is having trouble in school has a much higher probability of being diagnosed with ADD or attention deficit and hyperactivity disorder (ADHD) and receiving a drug, like Ritalin or Adderall, than of receiving tutoring, or therapy, or even parenting that doesn’t involve being babysat at restaurants by a small screen while the grown-ups talk amongst themselves. And getting insurance to reimburse the drugs is surely easier than getting it to reimburse tutoring, or the effort it takes to change family dining habits. Don’t get me wrong: I have nothing against Adderall or Ritalin or any of the truly fabulous arsenal of drugs that we have developed to treat real disorders. The thing is, a drug might help a child to sit still, but without some real strategies to help that child change her behavior, she might be looking at a lifetime of drug use. Another for instance: I know that many, many people have been seriously helped by antidepressants. But the new DSM proposes to lower diagnostic thresholds in ways that could lead to the medicalization and stigmatization of transient, even normative distress (see David Elkin’s open letter, and the ensuing discussion). Aside from the damage this can do to vulnerable populations, like the elderly or the grieving, I have another question: How does the prescription of antidepressants help the guy who really needs to re-examine his life, his marriage, his job?
I just worry that the ease and ‘reimbursability’ of getting drugs, as opposed to the difficult path of changing habits and behavior, can lead to a lifetime of dependence on substances rather than on self. And it certainly brings in income for the pharmaceutical industry. Which brings me to my second concern.

Concern #2: Conflicts of interest – who are the people making decisions about the categorization, diagnosis, and treatment of ‘mental disorders’? According to Cosgrove and Krimsky, at least 69% of the 141 DSM V task force members have ties to the pharmaceutical industry. Now, the APA instituted a new, mandatory disclosure process (wait, what? It didn’t have one before? Nope…), and I won’t bother you with the details of it, or with the several arguments about whether or not it is stringent enough. I will cut to the chase. Herein lies the rub: disclaimers and disclosure do not fully mitigate the effects of reward. Disclosing speaker fees and gifts and support from a pharmaceutical company doesn’t magically remove any gratitude, enthusiasm, or other human feelings. Bias can be sneaky and unrecognized. In fact, that might be the true nature of bias: that we don’t recognize it in ourselves. So the powerful DSM labels used to categorize, diagnose, and treat are being made by people who, for a large part, might be subject to bias. As Cosgrove and Krimsky put it, we might just be moving from ‘secrecy of bias’ to ‘openness of bias’. There is certainly concern that broader diagnostic criteria for selected psychiatric conditions will encroach upon the boundaries of what was once considered ‘normal’ – making that island smaller and smaller, medicalizing ordinary behavior and expanding the market for drug prescriptions.