VIDEO TEXT

- Good morning. My name is Mark Rothstein, and along with John Wilbanks, we are the co-PIs on this grant. And I want to welcome you to Georgetown, and to our policy workshop. And I'm looking forward to speaking with you, either formally in the group, or informally afterwards. You see on the slide the list of the project team. The ones with the asterisk are here today, and will be presenting. And there's a description of our affiliations in one of the handouts that's out on the table. Before I start, I wanna recognize the support for this grant from NIH, from NCI, NHGRI, and the Office of the Director. And also wanna especially thank Charlize for her assistance and support on this project. I also wanna thank the folks at Georgetown, who let us have their wonderful space, and helped with this meeting this morning. Matt, Katie, and Marie-Angel in particular. On the table where you registered, there are two large binders. The first one is actually the conclusions and recommendations that we're gonna present to you this morning. It's also available on our website. And in addition, we will post a video that's being taken of this session this morning, and we will let you know where it is, and when it's available. And now that I have a captive audience, I will ask for your indulgence for about 30 seconds to tell you that we are also highlighting, albeit briefly, another research project that was NIH funded, this one from NHGRI, that's called "Legal and Ethical Challenges of International Direct-to-Participant Genomic Research." That will be published next month in a symposium issue of the "Journal of Law, Medicine & Ethics." And our concluding article is also in a binder out in front, and you can access it on our website as well. So, let's get to the topic of the day, and that is unregulated health research using mobile devices. And this is at the intersection of two important new trends in research. One is research using mobile devices, and I think you'd be hard-pressed to go to any academic medical center that wasn't doing numerous studies that made use of mobile devices, for reasons that everyone knows; people have them, they have downloaded hundreds of thousands of apps, and are comfortable with them. And the other is unregulated health research. That is, research by people who are not subject to the federal research regulations, because it's not government funded, it's not in anticipation of a submission to the FDA, and you know all the rules. So this particular project that we're working on is at the intersection of these two important trends in research. But I think it's important also to note that the things that we have observed, the recommendations and the conclusions that we're gonna share with you today, do have broader applicability; that is, for regulated researchers who are using mobile devices as well. So I'm gonna briefly describe the methodology that we used for our study. It began with qualitative interviews with 41 thought leaders from these categories: app developers, researchers, patient and research advocates, regulatory and policy professionals, IRB directors and chairs, and so forth. We then held a series of working group meetings. The working group consisted of about 30 people. I'll mention them specifically in a minute. And we had four meetings to share the views of the various members, who had expertise in one area or another, and we invited expert speakers to come. And the first meeting was in the San Diego area at UCSD in October of 2017.
And then we met every six months on a different topic. Chicago, then Atlanta, and finally this April in Houston, where we discussed our tentative recommendations and publications. So, the working group members fall into two categories. The first one we call simply "authors." It's not very imaginative, but they authored articles that will appear in the special symposium issue on this grant, which will be published in March of 2020, and there are 21 articles. And in the handouts that you can get on the way out, or that you got on the way in if you have them, is a list of the specific topics and the authors. So here's page one, and here's page two of the authors. And I think if you know any or many of these people, it's not only an expert group, but shyness is not something that characterizes many of them, and so you can imagine how interesting our meetings have been over time. And then there's another group that we call "discussants," which means they were working group members, came to all the meetings, reviewed texts and so forth, but they're not writing an independent article for publication, and again, this is an expert group that had many insights for us, and we thank them. Next, in this grant we wanted to try some innovative techniques, and the first one that I want to ask John to talk about is the app-developer workshop that we had in September in New York.

- Thanks, Mark. And thank you all for coming out on a cold and rainy morning for a pretty niche topic. So it's nice to see you all here. So I'm not a normal recipient of an NIH R01 award. I don't have a PhD. And part of what I tried to do in this was to think about the impact that we would have. So part of these ideas, much like the workshop today, was to recognize that the app developers who will build these unregulated research apps in all likelihood do not read the "Journal of Law, Medicine & Ethics," and will be unlikely to follow our recommendations, or even know that they exist, if we don't take more non-traditional approaches to informing them. So, we hosted a free app-developer workshop at the Genome Center in New York in September. The idea of this was to create awareness of toolkits that embed many of the recommendations that you'll hear about today. So there exists a variety of toolkits and application frameworks, whether from Apple, from the Google community, from Sage Bionetworks where I work, or other places, that developers can use to rapidly build medical-style research apps. And by trying to make those frameworks encode as many of the recommendations as possible, we hope to have the developers, without even realizing it, follow good ethical practice for developing research tools. So the idea was to have a workshop where we discussed those toolkits and systems, and entertained a variety of Q&A and application-development questions. We engaged the New York City venture capital community, as well as the New York City pharmaceutical community, in getting the word out. We've recorded those videos and put them out on YouTube. We also have a set of conversations with companies that specialize in getting information to developers about running specific promotions of the videos to make the awareness as high as possible. Just based on the idea that, as part of the qualitative research Mark mentioned, it became pretty clear that to the extent that developers ignore ethics, it's not because they wake up in the morning and say, "I would like to ignore ethics." It's that they are going as fast as they can to build things, they're often cutting and pasting code that they find in other places, and so this is to sort of twist the old cliche, right: "Don't herd cats. Find their milk and put the vitamins in the milk." And so we had about 55 attendees at that workshop. We've had thousands of views on the videos already. And this is before we have even reached the agreement with the companies we're working with to promote the videos deeply into developer-specific communications channels. So we think that that was a relative success. And then the hope is that the impact of that is actually felt longer over time, as these resources are found, used, and integrated into common practice. The one good piece of this is we did have a couple of the large technology companies to which we will be making recommendations attend that workshop. And that has led to an ongoing conversation about how to embed into their consent products, in particular, some of the specific consent recommendations we'll make today. So we're happy about that. This briefing today is the second of these ideas. Again, I worked on the Hill a long time ago; it's a different world now than it was in the mid-'90s, but I did not have a lot of time to read academic journal articles when I worked on the Hill.
The idea was to try to, again, synthesize these ideas, present them, put them into a context that's available to the members of the Congressional staffs, the executive staffs, and the executive branch. And for those of you that are here, to give you materials you can take, and then propagate back in your own daily work as you go. We've tried to be as pragmatic as possible throughout the recommendations that you'll see, in recognition of the constraints that everyone who works in or with government has to work with, in terms of the regulations we operate under. But the idea was that it's important to write the papers, but we are actually trying to impact the way that we think about this, both at the developer level and at the policy-maker level. Mark mentioned the forthcoming symposium. So this is coming out in March 2020. It will be fully open access, which means that it will be copyable and redistributable in digital form by anyone who wishes to do so, subject only to the conditions of attribution to the authors and to the NIH as the funder of the grant. Just so you can see, and this is all available on the handouts, so you can sort of see what we're looking at, we have a variety of these expert perspectives, as well as the introduction. We'll be looking at what kind of design personas populate unregulated mobile health research. How this applies in behavioral health. You'll see a pattern here: one of the things we identified through the working groups and the interviews is that there's a patchwork of laws that govern research here, and that govern mobile research in particular, but it's quite possible to arbitrage that patchwork and find gaps where there is very little effective regulation, and no traditional research regulation. So this is why you see something like, when doesn't the Common Rule apply? State research laws, FDA regulation, state data protection. This is an increasingly important intersection, where you have consumer data protection law governing movement of data, and that can intersect in strange and unexpected ways with research regs. The FTC came up as a potential regulator in this space, so we have a few articles on that. Obviously, the early-adopter way that most technology is developed leads to an early-adopter group of people who would therefore enroll. Early adopters are not typically diverse or representative of the United States population; that has an impact on participation and risks. Some of the moral issues, issues around pediatric consent to research. Oversight. My own articles on consent, privacy, and security. If you really want nightmares, stop talking about consent and start asking about technical security. When people copy and paste to build software, they often copy and paste in security vulnerabilities in ways that can be deeply terrifying. That was the most difficult piece of this for me as a researcher. Data sharing, which is often very popular; we're hearing a fair amount about data sharing this morning because of Ascension hospitals and Google sharing their data together. International research is just as complicated as this, but more so, because you then add the complexity that the same patchwork typically applies in other countries, and GDPR as well. And then ending with the giant consensus article that's out there in the binders, stamped draft, importantly, on considerations and policy recommendations.
And I would note that with our group, whether it's the discussants, the working group, or even just the authors of the consensus article, there's not always consensus. We don't expect all of these recommendations to go in, but the reality is that we had to try to cover as many different areas with recommendations as possible, because this is probably going to take multiple different angles to cover over time. There's not gonna be a silver bullet where we just say, "We've passed this one piece of law, or one regulation, and we fixed things." Michelle? This is Michelle McGowan, one of our other key PIs and researchers.

- Good morning everyone. So as Mark and John have already alluded to, there has been this growth in unregulated health research, which raises questions around definition; what constitutes unregulated research, and what do we think of when we conceptualize health research broadly? So I came into this project with a background in studying ethical, legal, and social implications of precision medicine. And in this context, I became particularly interested in how what we would consider non-traditional researchers were trying to understand how information about themselves could be used to improve their own health. So in this context I was particularly interested in following citizen science, and participatory approaches to genomic research. So for those of you who are unfamiliar, I'll give you a little bit of background on why we might be experiencing a growth in unregulated health research today. So first, there are two different approaches here that we wanted to draw your attention to. One is the enthusiasm for the use of big data to understand and promote population health. Those of you who are in the space of federal health research understand that big data is everywhere now, in the sense that both traditional researchers in the health sector, and people in other, non-research-related sectors, are interested in compiling as much data as possible in order to understand and promote new understandings of population health. How can data be used in aggregate in order to understand health in new and different ways? Now this idea has also spread into other, nontraditional research sectors, particularly in the space of citizen science. So citizen science has both top-down and bottom-up sorts of mechanisms. Top-down approaches to citizen science would be those that involve a more traditional researcher setting the research agenda, and enlisting the support of laypeople, or people who are not trained in scientific methods, to collect data, to store data, or to lend their computing power in order to promote a research agenda. More bottom-up approaches to citizen science might be those where nontraditional researchers, those who don't have scientific training or work in traditional scientific venues, are trying to design and conduct research themselves, and also disseminate the findings. So historically, the field of citizen science emerged out of what we would consider non-health-related sectors. Some of the earliest examples are in environmental science, and in fields like birdwatching, for instance, to try to understand migratory patterns of birds. And increasingly, we're seeing citizen science methods being used in health sectors, both by traditional researchers, and by those who we might consider to be laypeople who are trying to change and disrupt the way that we think about science and health. So together, the promotion of big data in science, and the extension of the methods of citizen science into health research, are fueling a growth of unregulated health research. So why might this be appealing? So there is some skepticism about the traditional approaches to health research that may be contributing to what we're calling "unregulated health research," and I'll define that in just a moment.
But the first is that there is some skepticism that the academic institutions and commercial entities that house most health research are perhaps too stodgy in their approaches, too slow, too focused on bringing in traditional research dollars, and not thinking more innovatively about how to move health-related research questions forward. So those who are arguing to disrupt this pattern would say that we can't just be focused on the randomized controlled clinical trial in order to move scientific research forward in the health sector, but rather we should be thinking more creatively about different ways that we can learn about health, and improve our understanding of the population's health through discovery science, which historically hasn't been as easy to fund through traditional mechanisms. Rare disease groups have argued that small populations are particularly challenging to study through traditional research mechanisms, and so the unregulated approaches to health research might afford us more opportunity to look at smaller populations, and to aggregate data from these small populations in order to understand health phenomena. And particularly, they are interested in thinking about how to speed along the process of research, so that unlike a study like this, which was funded by NIH about three years ago, and you're just now hearing about some of our findings, they want to try to move the pace of research forward in a much more rapid fashion, so that the people who could benefit from the findings of the research would be able to actualize those benefits sooner. So these are some of the reasons why people want to move away from the traditional health research mechanisms that have dominated health research, historically. So, as Mark mentioned, what we define as unregulated research in this particular study is any research that is not subject to federal research regulations. So the two that he mentioned were being subject to the Common Rule, as adopted by the 16 federal agencies that have signed on to it, and being a researcher who is interested in submitting research for consideration of either a drug or a device that would require approval by the Food and Drug Administration. So essentially, we're not interested in any of these researchers for the purposes of this study. We know that there are a lot of researchers using mobile devices who do fit these definitions of a regulated researcher, but we're interested in researchers who may not fit within this particular context. So one of the challenges I can acknowledge as an academic researcher, I work at a children's hospital, is that so often we get locked into this framework where we think about all research as being regulated research. So if we work in academic medical centers, it's hard for us sometimes to think outside of the box. So isn't everyone always gonna be regulated? No. In the context of this study, we found that there are actually quite a few entities that may fall outside of these particular regulations, although other regulations may apply to them. So some of those that may not be subject to these regulations include independent researchers, so researchers who are not affiliated with any organization that would be subject to the Common Rule, or that would be filing for FDA approval. Citizen scientists, as I mentioned before. Laypeople who are not trained in science, or working within the traditional scientific enterprise.
Patient-directed, or patient-driven, researchers; some may refer to these as patient advocacy organizations. But sometimes they're just individual families. We're seeing this more and more, that individual parents are trying to find ways of learning more about their children's conditions, a rare genetic disorder, for instance, and may put it out to the world to try to understand this rare condition that their child may be experiencing. So these particular researchers, some people would quibble with whether or not they're researchers if they're on a diagnostic odyssey, for instance, but they're trying to understand their child's health condition, and may not be subject to any of the regulations that we've described already. And finally, we have also identified that people who are doing self-experimentation in order to try to improve their own health may fall within this category as well. And there could be others too. So these are just the ones that we've identified, and we'd welcome any feedback on whether there are other groups or individuals that you think might fall within this as well. So some of the tools that these types of researchers have used in the mobile health device research realm include health apps for mobile devices that collect biometric and passive user data, and my colleague John will talk a little bit more about some examples of that. Some are using direct-to-consumer genomic testing, and for those of you who are less familiar with this space, the companies that offer direct-to-consumer genomic testing offer the opportunity for consumers to download their data, and then they can upload it into other spaces, including open-source data platforms, for further analysis by larger communities. Publicly available datasets can be used by unregulated researchers in order to try to understand population health in new ways. And crowdsourcing as a tool has been utilized by these researchers, among many other types of groups. These days crowdsourcing tends to be a popular way to try to aggregate information and understanding about a particular phenomenon. And finally, the use of social media through mobile devices has been used to promote translocal engagement. This has been particularly useful for rare disease groups, for instance, who have less face-to-face contact with other people in the same situation, but it allows for more collaboration across time and space to understand a particular health phenomenon. So at this point I'm gonna turn this over to John, so he can give us some examples.
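
To make the direct-to-consumer piece concrete, here is a minimal sketch, in Python, of what downloading and re-analyzing one of these raw genotype exports can look like, assuming the common tab-separated layout (rsid, chromosome, position, genotype) with comment lines. The sample data and the variant looked up are illustrative assumptions, not a reference to any specific company's format or product.

```python
# Sketch: parsing a direct-to-consumer raw genotype export for re-analysis.
# Assumes the common tab-separated layout (rsid, chromosome, position, genotype)
# with '#' comment lines; the sample rows below are made-up illustration data.
import csv
import io

SAMPLE_EXPORT = """# Sample data in a typical tab-separated export layout.
# rsid\tchromosome\tposition\tgenotype
rs53576\t3\t8762685\tAG
rs1801133\t1\t11856378\tCT
"""

def load_genotypes(handle):
    """Return {rsid: (chromosome, position, genotype)}, skipping '#' comments."""
    genotypes = {}
    for row in csv.reader(
        (line for line in handle if not line.startswith("#")), delimiter="\t"
    ):
        if len(row) == 4:
            rsid, chromosome, position, genotype = row
            genotypes[rsid] = (chromosome, int(position), genotype)
    return genotypes

# In practice the handle would be the raw file a consumer downloaded from a
# testing company; an inline sample is used here so the sketch is self-contained.
calls = load_genotypes(io.StringIO(SAMPLE_EXPORT))
print(calls["rs53576"])   # ('3', 8762685, 'AG')
```

Open analysis platforms that accept such uploads do essentially this kind of parsing, at much larger scale, before aggregating data across many contributors.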

- Thanks, Michelle. So, I'm gonna zoom in on the mobile part from everything that Michelle talked about, because it's one of the sort of most important pieces that the research pulled out. So the goal for mobile devices and health is that you can move beyond the tracking that you get when you go to the health system. And so I'm gonna go through an example of an app that we built at Sage. So we're a nonprofit, we receive federal research funding. Therefore we are regulated. But one of the points I'm gonna make is that anyone could build an app that looks exactly like our app, and it would look as if it were regulated, but it wouldn't be. But the goal is that you can go after something like Parkinson's disease, which is an illness that is very variable day-by-day, week-by-week, month-by-month. And so when you go into the doctor, the doctor doesn't know if you're there on a good day or a bad day. And it makes it quite hard to understand, even in a research context, even at a once-a-month level. But if you have the phone, which goes with the person every day, you can really get beyond this kind of insular, periodic data collection. So mPower, which is the first study we released on this, used the phone to measure Parkinson's, because the sensors that detect landscape and portrait orientation can measure tremor. The same sensors can measure gait. You can use the screen of the phone to measure motor initiation and dyskinesia. You can use the microphone to measure hypophonia and other elements of the voice. And you can even use it to gauge memory. And you can do this every day, or multiple times a day, and that lets you start to see the daily life of someone with Parkinson's. And you can also radically scale. So again, we're a small nonprofit. The core funding for this came from the Robert Wood Johnson Foundation; Paul Tarini is here from them today. We were able to enroll 16,000 people in the first six months, which, for comparison's sake, was an order of magnitude larger than the largest comparable study funded by the NIH, at a significantly higher dollar rate. And this is kinda the promise; this is why people are moving to this kind of approach: you can radically increase the sample size, at least in theory. We also have a preprint out that quantifies just how many people drop out, which is correspondingly high, for what it's worth. But you can sort of see that in these apps, there's this promise of both daily tracking and of radically larger sample sizes. And what it can do is really increase what you can see in the lived experience of someone with a condition. So one of the core tests in Parkinson's is for dyskinesia, and you'll have a clinician stare at you while you tap on the table. And they'll maybe rate you on a one to five scale, one being low, five being high function. And if they're really advanced, they'll videotape you and count the number of taps. What you get on a screen is all of these other features beyond the number of taps. And what that does is let you start to untangle how drugs work for different people. So on the left you have an older gentleman, where the number of taps, you can't quite read this, the number of taps is by far the most significant element of the L-Dopa response for this individual. But we have a woman on the right, where the number of taps is down in the middle. So under the sort of more traditional measure, you would not see that she had any benefit from L-Dopa.
The physician might decide to take her off of the L-Dopa, but the reality is that her benefit was not in the raw number of taps, but in the accuracy of her tapping; it was defeating her tremor. And that would have been invisible in the traditional measure, right? So this is, like, the sales pitch for mobile research: this kind of dimensionality. And then on top of that, it lets you form these n-of-1 hypotheses. So this is one individual over time, from left to right. The long up arrows are days where the number of taps went up, right, from morning to afternoon, after taking L-Dopa somewhere in the middle. And you see sort of a nice, consistent pattern along the left, where every day the participant is getting pretty good benefit, or measurable benefit, from the medicine. And in the middle something happens, where you go from sort of predictable, if variable, upward benefit from the medicine, to days where the benefit goes down, where there's actually less tapping after the medicine in the middle. And then we go back towards the right to a more consistent pattern. Something happens here in the middle which is very relevant to that person. And so part of the other benefit of this, in theory, is the ability to return more personal benefit for enrolling in a research study. Because typically when you enroll in a research study, the benefit is to the collective, and it takes a long time; the risk is personal and to you. And the idea, the hope of this, is at least that that benefit can be more personal and more immediate, counteracting the slowness and the exclusionary aspects of traditional research that Michelle noted in her comments. So this is sort of the sales pitch and the idea behind all of this. Kyle's gonna come up and talk about the benefits and risks of this though, because all of these things come with trade-offs. I'm just gonna sit here,

- Sure.

- because I think I'm up after you.
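
To make the tapping example concrete, here is a minimal sketch, in Python, of the kind of feature extraction described above: the traditional tap count, plus inter-tap interval variability and tapping accuracy, the dimension that made the woman's L-Dopa benefit visible. The event format, target coordinates, and feature names are assumptions for illustration; this is not mPower's actual analysis pipeline.

```python
# Minimal sketch of tapping-feature extraction of the kind described above.
# Assumptions: each tap is (timestamp_seconds, x, y) and the participant
# alternates between two on-screen targets. This is NOT mPower's pipeline.
import math
from statistics import mean, stdev

def tap_features(taps, targets=((100, 300), (250, 300))):
    """Summarize an alternating-tap test beyond the raw tap count."""
    times = [t for t, _, _ in taps]
    intervals = [b - a for a, b in zip(times, times[1:])]

    # Distance from each tap to its nearest target: a crude accuracy measure,
    # analogous to the benefit that was invisible in the raw tap count.
    def nearest_target_error(x, y):
        return min(math.hypot(x - tx, y - ty) for tx, ty in targets)

    errors = [nearest_target_error(x, y) for _, x, y in taps]
    return {
        "tap_count": len(taps),                                  # the traditional measure
        "mean_inter_tap_interval": mean(intervals) if intervals else None,
        "interval_variability": stdev(intervals) if len(intervals) > 1 else 0.0,
        "mean_target_error": mean(errors) if errors else None,   # tapping accuracy
    }

# Example: compare features before and after a medication dose (made-up data).
before = [(0.00, 180, 310), (0.45, 140, 290), (0.93, 210, 330)]
after = [(0.00, 102, 301), (0.31, 248, 298), (0.60, 101, 302), (0.90, 251, 300)]
print(tap_features(before))
print(tap_features(after))
```

Comparing features like these morning versus afternoon, day after day, is one way to express the n-of-1, before-and-after-medication view shown on the slide.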

- Okay, great. Hello everyone. So we've talked quite a bit about the benefits, potential benefits, of unregulated research. A lot of these researchers are interested in using their own funding stream. So, you know, if you are an advocate for a rare disease, you want your disease to be studied, but the NIH may not prioritize your disease the way you want. So you're able to take your money to the research, and say, "We have an organization, we want to do our own research." Or, "We want to pay you to do our research." There are, obviously, new methods. We were just discussing that. And a focus on different topics that have not been studied as much in the past. There's a big focus on democratizing research; just as in other contexts in this country, sometimes it's unclear exactly what we mean when we talk about democratizing. But in this context, folks really mean that they are getting to play a part in this, and they see that as an inherent value of doing this. It also expands the base of researchers. A number of the folks that we spoke with at our first working group in San Diego were folks who had an interest in their own disease, or a disease that their child has, and have a skillset relevant to that. A lot of these folks work in computer science, or they work in data science in non-health-related areas. And when they become interested in a particular problem, they want to solve it. And so we're really bringing in new skillsets from folks who typically would not be involved in health research, or would feel excluded from that. I just wanted to mention the Crowdsourcing and Citizen Science Act of 2017. Many of you have potentially benefited from this law, which basically just allows federal agencies to utilize resources like crowdsourcing in order to achieve their aims. And the law, itself, provides some of the benefits that folks might really like about this approach, one of which is that it can be relatively inexpensive. So, a federal agency, or maybe a rare disease organization, wants to get feedback from a large number of people about a particular problem. They're able to do that very quickly and inexpensively, often using completely free tools to get feedback and insights. There are risks of harm, however. And I just wanted to start off... This is probably the least intuitive piece of this, but it's also the most important, which is that even though, you know, we all carry this device in our pocket, and we generally go through our day and don't really recognize any harms, there are harms that can result from use, especially of apps focused on health, which then transfer, also, into using those types of apps for research purposes. One is that these apps often provide feedback, or what many folks in this field call "insights," that are erroneous; they're based on bad information. Or they are sort of released into the world before they've been vetted effectively, and turn out to be false. So, for example, you know, someone using mPower might start using the tool, and get the false impression that they have Parkinson's disease because of the types of information that's being returned to them. But maybe in fact they don't have that. It can also sort of influence medication dosing: if someone is using a tool like mPower, or some of these other tools, they may start to see a benefit from taking more of their medicine, and so now all of a sudden they decide, "Well, I'm gonna take more of it."
So you can sort of influence behavior in a way that creates harm. Another one that's sort of related is that apps can influence behavior by basically causing people to over-focus on certain issues. So, for example, there's some evidence that sleep trackers can create anxiety around sleep, and sort of obsessive behaviors around getting, quote, "healthy sleep," which, ironically, leads to unhealthy sleep patterns. The same effect can occur in diet, where folks are really focused on healthy eating, but they become so focused on healthy eating that their habits become unhealthy, in that they sort of take it too far. So these are important risks. They are things that we tend not to think about, and they're things that many tech companies really would not acknowledge as a possibility, but it does happen. And it's not just that folks make bad decisions, and the producer of the health app is sort of not responsible for that. There are definitely things that the app developers can do that make these harms more likely. And so part of the problem here is ensuring that the folks that are creating those apps encounter, along this path, different kinds of input that would help them make better decisions that would create less harm for their users. Some of these other things I'm sure many of you already are well aware of. We're very concerned that disclosure of private health information could create dignitary harms. When we install an app, we authorize the use of certain permissions on our phone, but of course, none of us read those permissions. And so there are many examples of health-focused apps, and also health research apps, including permissions on the device that are really, obviously, not necessary for the function of what the user thinks they're getting out of the app, and that can lead to tracking various kinds of sensitive information through the device. And then, economic harms, which are closely related to dignitary harms. Some of these effects can not only cost me the sense that information I wish to keep private has been kept private, but also mean that information is being used to harm me through things like identity theft, or folks accessing text messages, photos, videos, and then, even more immediately, credit card data and other personal data. And then, I do just want to draw attention to societal harms. We, unfortunately, don't pay enough attention, I think, to this phenomenon of group harms, which is that, in small, subtle ways, individual harms add up to an effect on a group that was not intended. And bad research can reinforce stereotypes and create stigma for groups, or reinforce existing stigmas that harm whole groups. And also, bad science can come out of this work. So, you know, an unregulated researcher doesn't go through the conventional IRB review, which, as you know, also includes a dimension of evaluating the science, and whether the usefulness of the science justifies the risks; but also the funder review and peer review leading to publication, all of that plays into making sure that the information that gets disseminated from a project is likely to actually help folks. And this can be sort of magnified in this area by distribution over social networks. So these are folks who would be less likely to publish an article in a peer-reviewed publication, in which what they write gets some peer review. They're also more likely to disseminate this over social networks.
I'm not sure if you all are aware, but false information does get propagated over social networks sometimes. So this is a particular risk here: that insights from a poorly designed device lead to false or incorrect conclusions that people think are important for health, which then get disseminated over social networks, and many, many people see that, and now they've adopted changes in their health behavior that are creating harms. So, a number of risks there. And now we're gonna get to the meat: recommendations.

- So, you've all had a chance to see a handout that has a summary of our recommendations. What we wanna do now is to discuss how we came up with them, why we came up with them, and why we didn't do other things that might be considered options. And when we go through that, then you'll all be totally in agreement with all of our recommendations. So when we thought about what we should recommend, the first thing that we needed to discuss was whether we wanted to recommend extending the Common Rule to all research and all researchers. And the argument would be that the Common Rule is best suited to safeguard the welfare of research participants, that the US is an international outlier in coupling research regulations and research laws to funding, and that we should recommend that we have such comprehensive coverage in the United States. We did not think that was a good idea, because I think our experience over the last several years made it quite clear that there is little political support at the present time for doing this, and it would be, in my judgment, injudicious to recommend that, given the fact that it has no chance of going anywhere. So our view was, that's out, okay. The other option is we could maintain the status quo, and the status quo, with regard to unregulated research, is no regulation. A laissez-faire, hands-off attitude with regard to all the categories of researchers that Michelle went through. The argument could be that we should not design a solution for a problem that doesn't exist, and there have been no major adverse events yet, or none that we know about. That if it were somehow regulated, or focused on at the federal level, these researchers would simply go to their garages and not be heard from, or some people would just give up on the idea of doing research, parents of a child with a rare disease, for example, and that would be a bad thing. We rejected that as well. We think there are certain problems associated with unregulated research that are likely to get worse in the future as it becomes more common, and as people are exposed to various types of research enterprises. So we rejected both of these extreme positions: that we cover everything under some new version of the Common Rule, or that we do nothing. And so what we opted for was a middle-ground approach, based on pragmatism. What can we do? What's feasible to do? What will work if we do it? But the question is, how can you make people who don't wanna do it, do it, when you've got no leverage over them, and almost by definition, they're resistant to the idea of what you're trying to sell them? Good question. So our answer was sort of in four parts. Number one, we want to not so much establish the boundaries, but make sure people understand what those boundaries are. And we already have in place certain mechanisms to deal with this. First of all, the FDA has jurisdiction over a subset of mobile devices that we'll talk about in a minute. And in addition, the FTC has jurisdiction over false, misleading information being spread about certain products and so forth. So we have laws, and we need to emphasize what those laws do. The most important thing that we are recommending, however, is to provide education and assistance to researchers who are not professional researchers, and to possible research participants who may not receive the information that they ordinarily would receive through the informed consent process and the like.
Our goal to get this into place is first to appeal to the self-interests of those parties, and that was one of the purposes of the app-developer workshop that John mentioned. And that is to say, "Look, there is already off-the-shelf technology that will make your product actually better and more acceptable. And you can do it at low cost. And these good things will result." The other thing that we can do is try to make it easy for researchers and research participants and everyone involved in unregulated science. So I just wanted to mention, as I go through all the recommendations: on this slide are all the recommendations for revising the federal research regulations, and that is none, okay. So now we move to the states, and we actually do have recommendations for state governments. There are six states that have state research laws. The last four on the list, New York, California, Illinois, and Wisconsin, have what you might consider to be laws that deal with traditional, clinical research, which really would be inapplicable here, right? Because it's not in a healthcare institution, it's not interventional, and so forth. The two states that are relevant to our discussion here are Maryland and Virginia. Maryland's law we like better, because it applies the Common Rule to all research conducted in the state. Now, you may ask, "What does that mean, in the state? Is the researcher in the state? Are the participants in the state?" There are lots of issues surrounding how it actually would apply to these situations. But the advantage of Maryland over Virginia, in our view, is that if the Virginia approach were replicated and every state had its own research law, you'd have an impossible collection. So our choice would be Maryland, even though we're not convinced that that's the way to go. So states not currently having these laws, and that means 48 states, might want to consider enacting a comprehensive law to regulate all research conducted in their state. We recommend the Maryland law. And states also should consider extending the application of data breach, data security, and data privacy statutes to all mobile-device-mediated research, because these would not normally be covered under the HIPAA Privacy Rule, and therefore the breach notification requirements would not apply to them. Okay, next comes NIH. And I can tell you that the set of recommendations for NIH was very enthusiastically debated by our group. And I will sort of run through the arguments that characterized our discussions. First, the argument would be that NIH is a terrible place to regulate, because NIH epitomizes the research establishment. Citizen scientists and all of these other people that we talked about are anti-establishment; they're gonna rebel at the notion that NIH has anything to do with this, whether a role or the lead role. NIH also has a compliance role. Somebody told them that people who are spending millions of dollars of taxpayer money ought to be accountable in some ways. They ought to do what they're promising to do, and not misuse the money, et cetera. So NIH has various ways that they check up on people. And just the thought that there would be any sort of compliance role associated with this, even though NIH still technically wouldn't have jurisdiction over them, would make people unhappy.
And also a point that John shared, and that is that app developers would find it curious to look to NIH for any sort of support or guidance, or as a source of information. Those are the arguments against having NIH involved in this. Well, the arguments in favor of having NIH are the following. NIH already has numerous programs promoting novel research strategies. They have various programs of scientific education, and training, and workforce development. What we are proposing would not be totally alien to the culture of NIH. And NIH already has an interest in mobile health research, and citizen science. And John and I had the opportunity to have a very, I think, fruitful discussion with a citizen science working group at NIH about these issues. And as a result of considering these arguments, we opted for the pro arguments. We think NIH has a unique and important role to play here. And in particular, we would start out by recommending that efforts to assist unregulated health researchers should be centralized and increased at NIH, and we will leave to the decision-makers whether it's starting a new office of unregulated health research, or designating an existing center or institute to be responsible for it. But we think it would be much better if it were coordinated and located at one discrete place at NIH. We also think it would be very valuable to appoint an advisory board of diverse stakeholders, which would include unregulated researchers, patient advocates, app developers, et cetera, who could not only give the appearance of lending their expertise, but in fact lend their expertise, and suggest ways of reaching their fellow contributors. What we are stressing is education and consultation, and a source for information. One of our proposals that we introduced at our working group meeting said that "unregulated health researchers need accessible, consolidated, updated, and curated information," blah, blah, blah, and you can read it. And someone from the group said, "Well, regulated researchers could use that information as well; why are you singling out unregulated researchers for this bonus?" And my answer was, "Our grant is not about regulated researchers, it's about unregulated researchers. And we think they would benefit from this. Perhaps after this recommendation is implemented, and there are positive things that are associated with it, maybe some of the aspects of that can be carried over to the regulated researchers." But this is designed for unregulated researchers, and it should provide technical and understandable information about some of the technologies that are involved in mobile health applications. And in particular, we say that a new website should contain some of this information: who's covered by the existing laws that we're talking about, the Common Rule, FDA, FTC, et cetera. It should accumulate and publish best practices and ethical principles that would apply, that have already been developed elsewhere. It should include a directory of open-source tools, maybe sample consent documents and security information, as well as resources for technical assistance. NIH should fund studies on unregulated mobile health research to determine the most effective ways of encouraging compliance. I don't know what they are. I don't think anybody does. It's a very important area for further scientific study.
NIH should, in consultation with OHRP, work with citizen science groups and other organizations of unregulated researchers, for support and for educational programming. And then, probably the most controversial of all of our NIH recommendations, maybe the most controversial of all of the recommendations in the entire package, is that NIH, in consultation with OHRP, should consider the feasibility of establishing or supporting cost-free, independent research review organizations to provide advice to unregulated researchers. And at the same meeting, one of our colleagues said, "I think this is a terrible recommendation, because what you're suggesting is a watered-down version of IRBs, or IRB lite." And you all remember when that discussion took place. And we shouldn't do that. And my response was, and is: if I have the option of providing some independent ethics input into the plans of researchers who don't know anything about informed consent, who don't know anything about privacy, who don't know anything about all of the other issues that we deal with in research ethics, if I have an opportunity to provide that information, why wouldn't I do it? The only other option is doing nothing. Doing nothing is not an option in our view, and so we think it's something that we ought to try. Maybe it won't work, but I think it ought to be tried. We have recommendations for the FDA as well. And the background for the recommendations is that FDA certainly has jurisdiction over a small slice of mobile devices, and the FDA, along with ONC, FTC, et cetera, has published an online decision aid to help app developers, and we think that ought to be encouraged. So our recommendations are to continue this inter-agency collaboration and increase its engagement with the app-developer community. As some of you may know, in September of this year FDA came out with a whole new set of guidance documents regarding mobile health, and outreach and education to the community would certainly be valuable. And the FDA should require developers of mobile health apps to be transparent and make sure, to the extent possible, that users of these apps and devices are informed about the contents. Recommendation 3-4 on this slide really gets into the weeds about some of the specifics that the FDA ought to address, and these are explained in more detail in our longer document. We also have recommendations for the FTC and the CPSC, the Consumer Product Safety Commission, both of which have some jurisdiction over these issues. Remember that the FDA's jurisdiction is quite narrow, but harms that occur from the use of mobile health apps may be under FTC jurisdiction, especially where claims are made that are not based in science and individuals are harmed; the FTC has brought a number of enforcement actions against bogus cures. And the Consumer Product Safety Commission is getting involved in regulating Internet-connected devices, which would include mobile health devices used for clinical purposes, as well as research. So we say the FTC should increase its efforts to encourage self-regulation of mobile health researchers by providing guidance. It should promote privacy, transparency, and fairness with regard to mobile health research, and increase targeted enforcement as necessary. The FTC should develop and provide educational materials for consumers. The Consumer Product Safety Commission should increase its surveillance and monitoring of the software and the apps that are used in Internet-connected devices. We also have recommendations for CDC.
And people might say, "Well, what does CDC have to do with this?" One of the problems that we noted was a serious lack of information concerning how much unregulated research is going on, and whether there are any harms that are happening to individuals. And we think that CDC would be the appropriate agency to address this. So we recommend that CDC should work with NIH and other entities to establish the prevalence and nature of unregulated health research using mobile devices, and to monitor it over time. CDC, in consultation with NIH, OHRP, and perhaps private foundations, should then develop a system for compiling and reporting data on adverse events. And one interesting theory is that CDC is already interested in adverse events, accidents, and harms that are caused by, for example, texting while you're driving. Well, this would be another kind of harm that would occur as a result of using mobile devices, in this case for health research, perhaps when there was no good basis for doing so. So CDC, along with the collaborators that we mentioned, should then take this information and, from it, issue reports and recommendations to promote ways of preventing, or lessening, these kinds of events. Now we have some other recommendations that John's gonna go through.

- So, continuing with the pragmatic goals of this, we decided to recognize that, in particular, Apple and Google, because they maintain app stores that govern 99.9% of the mobile devices, mobile telephones, in the United States, have an important soft-power regulatory role to play. And indeed, they already play a soft-power regulatory role when they exercise discretion over what is and is not allowed in their app stores. As a reminder, the app that lets you avoid tear gas in Hong Kong was banned from the Apple App Store after protests from China. So clearly Apple is willing, Google is willing, to exercise discretion over what's in their app stores for certain purposes. And so part of the idea is to encourage them to exercise that discretion on behalf of the consumer in this context. And so, just for a little bit of reference, it's really important to understand how open-source software intersects with sort of the dangers that Mark just talked about, especially the FTC dangers. So ResearchKit, which is the Apple application framework for building a health research app, is open-source. This was actually a big change for Apple, to make it open-source. It has driven the development and release of, you know, somewhere in the range of 75-100 research apps over the last five years. It's hard to track, because there's not an accurate count available that we know of. But the important part of this is, A, it's easier for a traditional researcher, or an ethically behaving researcher, to grab the open-source framework and make an app that looks professional, without having to engage a professional designer. So if you've ever used a PowerPoint template that makes your PowerPoint look fancy, it's the same concept. It's not full of content, but it's structurally complete in a way that makes you look like you really thought about your PowerPoint design. There's a similar version of this that was created by the Android community for Google, called ResearchStack. This comes out of Deborah Estrin's lab at Cornell, and part of what won her the MacArthur genius grant earlier this year was her conceptual work on this. But it's quite easy then to say, if you have, for example, I was looking this morning in the Amazon bookstore on health, there are crazy diet books there about how all you need to do is eat meat and salt to be healthy. It would be trivial to create a research application that looked extremely professional, did not involve consent, did not disclose risks and benefits, had no independent review, but looked precisely like an app developed and deployed by Harvard or NIH-funded researchers. So this is the core risk here: you can then sort of say, "Not only is my book telling you to eat meat and salt, but it is supported by clinical research." And so it's quite easy to make these clone, and copycat, and lookalike apps. And in many ways asking the FTC and others to pursue these at a general level is quite difficult; that's why the recommendation is for targeted enforcement. But the app stores have enormous power. Anyone who's ever uploaded an app to an app store knows that they will reject you based on the pixel density of your icon. That's how detailed the review is for both Apple and for Google. So these are recommendations that recognize that power. So Apple already requires a signed informed consent document for anything that uses ResearchKit, so we recommend that Google join that for applications emerging from the use of the open-source ResearchStack framework.
One thing that was mentioned when ResearchKit was first launched: so, Apple does require IRB approval for ResearchKit apps. This was a change that came about a month after they first launched ResearchKit, after a raft of negative publicity. Their original position was that their terms of service for developers require you to obey every relevant local law, and they said that that was enough to require IRB approval when necessary. They've changed that, and they now actually require this: whether it's required by law or not, Apple requires an IRB approval. Google does not yet do this. Neither of them requires documentation. So one of the most important recommendations here is: upload your stamped approval letter. We have applied for many of these at Sage. It is a document. It is trivial to upload it alongside your privacy policy. But it would create a very powerful soft requirement to get some form of external third-party review. To be clear, one of the benefits of this is that most of the developers that we've talked to don't wake up and think, "How can I be evil?" They're simply overworked and going as fast as they can. There is a small set who do wake up and say, "How can I be evil?" And that's really what the FTC recommendation is about. This is to catch the people who just don't really understand that there are reasons for consent, and independent review, and some of these other elements of traditional bioethics, and these are just some different ways to instantiate them. Another piece of the recommendations for Apple and Google, and this is something that Megan Doerr, who will help lead our conversation, is an expert on, is that other legal documents intersect with the consent form and the protocol. In particular, the privacy policy and the terms of use for an app, which are typically required, may contain language that is in conflict with an informed consent or a clinical protocol. Often, for example, if you were to integrate a Fitbit into your unregulated mobile research app, the Fitbit data is governed under the Fitbit privacy policy. So that data exists in two copies, one of which is at Fitbit, which is of course now Google, and one of which is inside the study, and you can only govern the copy inside the study with your document. So the idea is that for these kinds of apps, Apple and Google could quite easily define and enforce a minimum set of requirements for privacy policies and terms of service. For example, saying that these policies could not transfer data to third parties without specific, explicit further consent. This ties in with the education and sort of capacity building: to the extent that we push people towards the existing standard toolkits, these toolkits do contain capacity for a variety of validated methodologies for informed consent, from a workflow perspective, a user-centered design perspective, and a running open-source code perspective. And then the idea being that we could accommodate independent review when required by the app store platforms; this might connect to the novel forms of independent review that Mark mentioned. But in particular, what it could do is allow for the isolation of malicious code elements. So this would be software code that has chronic security failures, like the Heartbleed failure. Or code that was designed to look secure, but sort of intentionally leak information. This is often the code that automatically takes rights to your camera and your text messages, and the rights to scan your email, and so forth.
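
To illustrate what that kind of store-side soft requirement could look like in practice, here is a minimal, purely hypothetical sketch, in Python, of an automated submission check: require an approval letter and a consent document alongside the privacy policy, and flag policy language that conflicts with a minimum "no third-party transfer without further consent" rule. The field names and the phrase-matching heuristic are assumptions for illustration; this does not describe Apple's or Google's actual review tooling.

```python
# Hypothetical sketch of a store-side review check for research apps.
# Field names and rules are illustrative assumptions, not Apple/Google policy.

REQUIRED_DOCUMENTS = {"informed_consent", "privacy_policy", "irb_approval_letter"}

# Crude heuristic for policy language that would conflict with the recommended
# minimum terms: no transfer to third parties without explicit further consent.
FORBIDDEN_PHRASES = [
    "we may share your data with third parties",
    "sell your data",
]

def review_research_app(submission: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means it passes."""
    problems = []
    documents = submission.get("documents", {})
    for doc in sorted(REQUIRED_DOCUMENTS - set(documents)):
        problems.append(f"missing required document: {doc}")
    policy_text = documents.get("privacy_policy", "").lower()
    for phrase in FORBIDDEN_PHRASES:
        if phrase in policy_text:
            problems.append(f"privacy policy conflicts with minimum terms: '{phrase}'")
    return problems

# Example submission lacking an IRB letter and containing a conflicting clause.
app = {
    "name": "MeatAndSaltStudy",
    "documents": {
        "informed_consent": "consent text here",
        "privacy_policy": "We may share your data with third parties for marketing.",
    },
}
for issue in review_research_app(app):
    print(issue)
```
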
This would allow for CDC-style surveillance, identification of code elements, and then bringing down apps that had those code elements until they were repaired. Third, this concept of a software bill of materials, which is the idea that we know what's in our food, but we don't know which code elements are in our apps. This connects to the second recommendation: if we had a bill of materials for every app and we found malicious code elements, it would be easy to then, say, replace that code element with a different material that achieves the same goal. And then fourth, and this is coming from the privacy community, both from the Patient Privacy Rights organization and from DiMe, the Digital Medicine Society: publish a nutrition label, if you will, that discloses all the key elements of the privacy policy and terms of service in easy-to-read, human-readable form, much the same way that you don't have to look at the ingredients of a candy bar; there's a label that tells you what the nutrition information is. This is the relationship of these elements to each other. There are a variety of proposals for the bill of materials and the privacy nutrition labels to choose from. We do not recommend specific ones; we simply say that one of these options for each would support many of the other recommendations that Mark already talked about. So again, very pragmatic, but recognizing the fact that there is, more or less, a full monopoly over these app stores, so it only requires two players to make changes to have a significant impact. Wearable devices. Again, these are often going to be connected to research apps, or, in fact, be the heart of these unregulated research apps. One of the things that the interviews brought out was the trade-off between encryption, computation power, and battery life. So one of the reasons why the data that sit on a wearable are almost never encrypted is that encryption kills battery life, because it requires intensive computation on the device. That means that the data are typically unencrypted on the device, they're unencrypted in transit to the mobile phone, they're then unencrypted on the mobile phone, and unencrypted in transit to the device owner. The only exceptions to this are the extraordinarily well-funded ones who have had a breach, right? Even the extraordinarily well-funded ones that haven't had a breach often don't encrypt. So this recommendation is often not going to be tractable for a wearable device maker. So one of these recommendations is federal and state investment in encryption research and development, so that this is easier to do. The reality is that there's not been a lot of research at the federal level; this is probably more of an NSF kind of grant: how do you make really lightweight encryption possible without destroying battery life on the device? Otherwise, these protections will not ever be implemented. Security once the data leaves the device and goes to the phone: we found patchy-at-best adherence to basic cybersecurity. Not even, like, fancy cybersecurity, just basic cybersecurity principles about, again, encryption at rest and in transit, leakage to other apps, how easy it was to connect these through basic processes that didn't even get noted as preferences in the operating systems of the phones.
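
As one illustration of what lightweight encryption of data leaving a device could look like, here is a sketch using ChaCha20-Poly1305, an authenticated cipher that is fast in software and therefore often suggested for battery- and CPU-constrained hardware, via the Python cryptography package. The recommendations themselves do not prescribe a specific cipher, and a real wearable would do this in firmware rather than Python; this is only a sketch of the general technique.

```python
# Sketch: authenticated encryption of a sensor payload before it leaves a device.
# ChaCha20-Poly1305 is used here because it is fast in software, which matters on
# battery- and CPU-constrained wearables; this is an illustration, not a spec.
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()          # provisioned once, per device pairing
aead = ChaCha20Poly1305(key)

reading = {"device_id": "wearable-01", "heart_rate": 72, "timestamp": 1573560000}
plaintext = json.dumps(reading).encode()

nonce = os.urandom(12)                          # must be unique per message
associated_data = b"wearable-01"                # bound to the ciphertext, not secret
ciphertext = aead.encrypt(nonce, plaintext, associated_data)

# The phone (or relay app) decrypts and verifies integrity in one step; tampered
# or truncated packets raise an exception instead of being parsed silently.
recovered = json.loads(aead.decrypt(nonce, ciphertext, associated_data))
assert recovered == reading
```
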
This again goes back to the way technology is often developed, which is in a very sort of resource-constrained, move-fast-and-break-things, get-early-adopters approach. That means that even if we got the best possible governance from a privacy perspective, the security loopholes are such that it's trivial to access this information, obviously, in an attack mode, but often it simply leaks. And so it's like looking at environmental policy from the '50s, when it wasn't even necessarily that you had to intentionally leak your fluids into the river, it was just an accident in the way that you processed things. And so this is something that, again, can connect back to the recommendations you've already heard, but Google and Apple as platform providers of app stores have unique power to make this part of the review process in a way that can help increase confidence from the consumer perspective in the tools. We have another set of recommendations for organizations of unregulated researchers. So Michelle did a very nice job of breaking down kind of the ontology of unregulated researchers. There are quite a few existing organizations, as well as emerging kinds of organizations, that represent vectors for education, capacity-building, and novel forms of independent review. And so we wanted to make sure we called those groups out, because we think they have both a very powerful role, as well as an obligation to engage in this conversation. So you see this is the citizen science community. They have massive meetings. There are mini-startup companies, as well as startup nonprofits. You see patient cooperatives in this space. Lots of really interesting experimentation with business models. This is one kind of such organization. Another organization which Michelle mentioned would be the patient advocacy organization. There are thousands of them in this country, frequently focused on rare diseases. This is a new version of one called The Light Collective, which is interestingly formed around a genetic mutation. These are mainly women with mutations in the BRCA breast cancer gene who have had preventative mastectomies or preventative double mastectomies. They have formed a novel collective in response to realizing that their private Facebook group data were being sold to advertisers in ways that were fully legal under Facebook's terms of service. This is an example of a group that is doing a lot of self-directed research as well. I could mention Type 1 Diabetes and self-tracking of their glucose, right? So it's not just the traditional, you know, Michael J. Fox Parkinson's Foundation, it's these emerging social-media-connected groups that form around very, very specific elements of what they're doing. But there's a lot of peer support. And the thing is that peer support is a place for education and review in its own way. There are also emerging groups that are centered around physical locations. So the DIY, do-it-yourself biology community. These are groups that are building collective laboratory maker spaces. So if you've ever been to a 3D printing maker space, or if you've ever been to a Kinko's. Kinko's is a collective hardware place where you can go and do printing if you don't have those resources. Imagine a Kinko's, but for biology. This is what the DIY biology community is building. They have speakers, they have educational series. In the more extreme cases you see some of them, they call themselves bio-hackers, inject themselves with novel treatments on stage. 
This is increasingly, thankfully, fading as a tradition. And indeed, at the most recent DIYbio Global Conference, just last month, they called for more self-regulation, more attention to these issues. So these are very powerful groups to reach out to, because if what we're trying to do is education, we have to get to as many different locations for that education to happen as possible, and we have to make sure that our content is usable in all of those different contexts. And I would mention, for example, DIYbio is extremely anti-regulation as a field. And so the fact that they're coming around to thinking that maybe something is necessary is a sign that we can look to of what's coming in unregulated health research. They have been out attacking regulation for a decade, now they're starting to see the reasons why that might actually be useful; we wanna get ahead of that curve here. So in many cases we think these organizations will conduct studies in unregulated environments. So they need to be communicating some of the things we're talking about. Guidance, right? Including the idea that you don't just talk about the goals, that you talk about the risks, that you talk about the reality of the potential benefits, that you focus on data handling procedures. One of the core risks here is not, again, intentional attacks on privacy, but unintentional data handling problems; unpatched servers and similar issues. So that people who participate in this kind of community-oriented research understand, in the same way or a similar way, what they might understand in enrolling in regular research. Guidance, including default templates that are open-source, for privacy policies and terms of service. These people will probably not have the money to pay attorneys. So giving them good privacy policies is one of the best things that we can do, because they can then simply copy in the good stuff instead of the bad stuff. And then guidance on how to evaluate third-party devices they might want to integrate. One of the most interesting things that came out of our discussants, and working group, and key interviews, is the desire of many of these groups to integrate multiple wearable devices, some of which might be subject to FDA approval, like a continuous glucose monitor, some of which might not be subject to FDA approval, like an open-source crowdsourced continuous glucose monitor, you know, much less a wearable. And so knowing how to read those third-party terms of service, what to look for, is guidance that would be really valuable there. And now we're done with hitting you in the face with all the recommendations. Can I ask Kyle and Michelle to come on up? And Megan Doerr, who is one of our key scientists at Sage Bionetworks, the developer of the informed consent for almost everything we do, is going to guide us in a discussion. And we're using the microphones so that we can get this all on the record.
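
A minimal sketch of how the software bill of materials and privacy "nutrition label" ideas described above might be represented, assuming a simple Python encoding; the field names, the example components, and the flagged-component check are illustrative assumptions only, not a format proposed in the recommendations or required by any app store.

# Hypothetical sketch only: one possible shape for an app's software bill of
# materials (SBOM) plus a privacy "nutrition label". Field names and example
# values are assumptions for illustration, not an existing or proposed standard.

from dataclasses import dataclass, field


@dataclass
class Component:
    name: str      # a library or SDK bundled into the research app
    version: str
    source: str    # e.g. "third-party" or "open-source"


@dataclass
class AppDisclosure:
    app_name: str
    bill_of_materials: list = field(default_factory=list)  # list of Component
    nutrition_label: dict = field(default_factory=dict)    # key privacy facts, human readable

    def flagged_components(self, known_bad):
        """Return bundled components whose names appear on a list of code
        elements flagged for security or privacy problems, so an app store
        could pull the app until those elements are patched or replaced."""
        return [c for c in self.bill_of_materials if c.name in known_bad]


# Example use, with made-up component names and label entries.
app = AppDisclosure(
    app_name="ExampleStudyApp",
    bill_of_materials=[
        Component("analytics-sdk", "2.1.0", "third-party"),
        Component("consent-toolkit", "0.9.3", "open-source"),
    ],
    nutrition_label={
        "data shared with third parties": "no, not without further consent",
        "data encrypted in transit": "yes",
        "data encrypted at rest": "no",
    },
)

print(app.flagged_components({"analytics-sdk"}))

Under these assumptions, the same structure could feed both the CDC-style surveillance of code elements and the consumer-facing label discussed in the presentation.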

- Yes, and you should remember that this is being recorded, and will be put on the Internet. If that affects you at all--

- You may want to find us at the coffee break.

- Exactly.

- Yeah. I think this, oh yeah, here we go. This is working, good morning everyone. It's a pleasure to have you here. So I wanted to open the floor for a discussion, for questions, for comments, about these recommendations. They are wide-ranging, and I think implicate everyone in the room. So , what are your thoughts? Does anyone have any comments or questions to just start out our conversation this morning?

- [Darby] Hi, I'm Darby with Public Responsibility in Medicine and Research. And I was wondering about data aggregators. And I don't know enough about apps, and how much the data remains siloed. But for something like Google, who now owns FitBit, who can see your search data, I'm wondering how much access they have to the app that's coming in from, or the data that's coming in from the apps that they're associated with. If there were any thoughts about how to address certain companies having access to a really substantial amount of data on one person.

- We did not take up those issues in particular. They're very important issues. That's another 10 year study I think.

- And I think to the extent that Google, let's take Google as an example 'cause it's in the news today, and last week, and every week it seems. That's a group that really is gonna pay attention to the regs to the extent that they can. I think they're very good at arbitrage, legally, to make sure that they can sort of be governed when they want to be and not when they don't want to be. So I think to the extent that Google runs these kinds of apps, or that FitBit Google runs these apps, then I think we have to think a lot about aggregation. But that's probably more under antitrust, and sort of general federal emerging privacy law, than specific to the research itself. But I think the issue is that if data escapes the research regs and it's available to an aggregator, then it will be aggregated. And we saw this with Google's DeepMind acquisition in the UK. They started with an independent review board. They fired the independent review board. They started by saying they wouldn't integrate. They integrated. They closed the business unit and folded it into Google Health. And they simply paid the fine. Because it was cheaper to pay the fine and keep the data than to give the data back and be in compliance. And I think that's what we would expect in the absence of change.

- [Anita] Hi, Anita Samarth with Clinovations Government Health. I know that the conversation today focused specifically around much more consumer-type generated data. But the world that we work in is that medical data that now is available to patients, right? So there are certainly laws governing encryption, transit, and its storage between two health providers, but once it arrives on your patient's computer or your app, it's no longer subject to those rules, and you can send things via email. So I am curious from the panelists, did you intentionally sort of segment, and do you feel like there's anything different about the data that maybe originated from a medical record that was then subject to more regulations that now is in the patient's... Is that just another form of consumer-generated health data as you start to consider models we're seeing across the country; who can have access to your data and for what purposes?

- Well, I think you raise a very important privacy question, and that is what privacy rights individuals have to retain their health information, or any other information that they have. And it's often very surprising for people to realize that as a condition of employment, as a condition of life insurance, as a condition of getting a mortgage, or a variety of other things I could list, you need to sign an authorization disclosing that information to a bank or insurance company. And it's perfectly legal, but the result is that you really don't have any privacy left if you can be compelled to give it up as a condition of getting things that are really important, or essential to your life.

- I'd love to hear Megan's take, actually, on that.

- [Megan] So I think that this is a really important question for us to consider. So there are many intersecting elements here. First, any patient, any person, has a right to all of their data; all the data about them. And they have the right to do with that data what they choose. And regulators need to respect this space and allow the transfer of those data. I think the challenge, from my perspective, comes when those data are used as ransom, in exchange for services that a person needs, or desires. And so I think that that's one of the challenges here. Our recommendations are focused more on mobile health, on unregulated mobile health research. But as these health data start to become integrated into these health applications, I think that we're starting to see a multilayered data problem. And this is, I think, the tsunami that our first commentator was pointing towards, which is as metadata harmonization becomes easier, which it is, we're going to start to see more massively integrated data sets, which include traditional mobile health data, like from my running watch, but then also my direct-to-consumer genomic testing, and also my health record data integrated into a single app to return insights to me about my health. So I think it's just a complex intersection, and this work that we've done, I think, sits just prior to that line, yeah.

- I'll just add to this that how we think about this matters in terms of, are you directing your question towards regulators, or towards patient consumers, or the app developers. And so in my previous research, which has largely focused on how patients and consumers think about access to their own data, there is a very strong bent towards libertarianism, I guess you would say, in the sense that patients and consumers largely want to have access to everything, whether or not they know what the implications are of having it. And yet there's a very paternalistic sort of approach that's been taken by regulators, and by ethicists, such as my colleagues, not up here of course. But lots of ethicists have argued that people don't know what they're getting themselves into by opening these floodgates towards acquisition of their own data, and that they could be putting themselves in harm's way. But I think the pull has largely been going towards the consumer in this space, and the regulators will need to catch up. And in this case I would argue that it's not going to be a matter of ensuring that patients, or consumers, have adequate informed consent, because we don't really know what the implications will be. Like, I think this is speaking largely to the security issues that were mentioned earlier, that we can only give people a sense that we don't really know what the implications are going to be. If you want your information, great. But we can't guarantee that it can be protected from uses that we can't anticipate today.

- And maybe one example that might bring it out would be in genomics, right? So we have a lot of direct-to-consumer genomics, 23andMe, Ancestry, so forth. Most of them implemented this prospective right to download your raw data pretty early, in a way that was actually kind of impressive in a lot of ways. And two things happened. Now you have the follow-on companies, including, my favorite is the one that keeps trying to do spit collection at pro football games. People come into the game pretty drunk, I don't know if you've ever been to an NFL game, but drunkenness is one of the most correlated aspects of stadium entry. And so the likelihood that you can meaningfully consent to the external uses of your genome when you're drunk, in a public environment, is pretty low. So that's sort of one aspect of the fast follower there, is that the first companies might be ethical. The companies that are growth-hacking might not be. The second is that if you've got your DNA, you might do something interesting with it, like upload it to an ancestry testing site. Let's say you got it, like I did, from 23andMe. You upload it to GEDmatch and you find cousins, and you find out where your family came from, and how far back the settlers go in your family. Well, you know, that's now finding killers, because cops have decided that they can access that database with warrants. Up until six months ago there wasn't even a warrant needed, they were just collaborating. And so that's a secondary use that can implicate second, third, fourth, fifth cousins, because of your choice. And so that's a great example of an externality; you know, 23andMe doesn't own the responsibility for that, they just gave you your data back. You decided to upload it. So a big part of this is, once you get your data, to your point, how do we teach people not to upload their data? Because it would be evil, but if you wanted to set up a honey pot that was like, "Find out your risk of Alzheimer's disease, upload your EHR." And then you have a crappy machine learning model that sends back a prediction that says, "You're safe. Would you like to tweet this?" Right? It would be really easy to harvest EHRs this way. And so what we don't have is any of the cultural, technical, regulatory framework, that would help catch and punish that. Or, the nice version of it, which would be trying to get you a real prediction for Alzheimer's, and it would be really hard to tell the difference between the two. I think that's where we're going with EHR data. And Arthur is, no.

- [Tom] I think I got it. My name is Tom Sinks, I'm currently at the EPA. I spent 30 years at CDC, so I might comment to you later about your CDC recommendation. But I have three things. First of all, thanks, this is a great space to be in. It's timely, it's difficult, it's complex. It really needs simple solutions. There are plenty of other things going on with human subjects research right now, particularly in the regulated area and data sharing, and the mosaic effect, and the unintentional release of personal identifiers, even with the regulated community, so the unregulated community, it's a great space. One comment, and a couple of questions. Michelle, you asked for comments about other non-regulated researchers. And the things that come to my mind, they may have been in your slide, I didn't see them. Corporate. Corporate is huge. EPA actually has special regulations for third-party research. We require that any industry that's providing us research that we would use in an EPA decision on pesticides has to have gone through an IRB, or we won't use it. Has to go through something. We have a FACA committee that reviews that research. Foundation research. Foundations fund a tremendous amount, and associations, you know, trade associations as well. I don't know if there was one on your slides, but those are the three that come to my mind. The questions I have, and these are, one's kind of broad and the other one's specific. First, you use health research as a term, but I believe the Common Rule covers human subjects research, not health research. Health research is bigger, and my feeling is health research covers a much broader area than human subjects research. There's a lot of unregulated health research done by the regulated community, and so I'd just be careful on that term. I don't know if that was intentional or unintentional, and I could be wrong, so that's one question I have for you. Do you mean human subjects research versus health research?

- We mean unregulated health research.

- [Tom] Okay, but that would also include research being done by the regulated community. So let me give you an example. CDC conducts a lot of surveillance, which, because it's done by public health authorities, the states, that surveillance is considered not human subjects research. It's clearly human research. So whether, you know, these terms are, whether it's research or not research, stuff like that. So I'm just, I think there are examples, evaluation research, that would not be human subjects research, it would be unregulated, things like that. So just wondering about that. The other one. Mark, you specifically said your recommendation 2.6 for NIH. You know, you felt that having a space to give information to the people who need the information, great idea. But I was wondering why you wouldn't want to also recommend that we fund IRBs to be available to those people, should they choose, or need, to have an IRB. I've seen at least two examples where we have citizen science groups out there who are doing human subjects research. We don't own an IRB at EPA, but they have no way to access an IRB unless they go to a private group that charges a lot. So I'm wondering if that's something that--

- Well I agree wholeheartedly with your last comment. But it was felt by the group that we didn't want to recommend funding, we just recommended that this should be investigated, okay? So--

- [Tom] I don't think it would cost that much more to make IRBs available to these groups of people. And I'm not sure NIH would . But I'm not asking you to have specific people and resources to do this. I think that not having IRBs available to people--

- I agree. Maybe you're more persuasive than I was in getting--

- [Tom] It's a lot easier to look at this after the year and a half work--

- Well no. It's, we had a large group of people who did not view everything the same way. And I think the merit of the recommendation is we've now raised that issue. And so I was satisfied that it wasn't prescriptive in terms of how much, who pays, and people who are interested, and think about it, may come to the same conclusion that you did.

- [Arthur] Hi, Arthur Allen from Politico. I had sort of one kind of broader question, just to throw out there. I don't know if you have answers, but you might have thoughts about this. Which is, what's the dimension, or sort of significance, of this kind of research, compared to traditional research, in terms of drug development or the study of population health, and other areas? And I guess, an ethical question would be, is there an ethical dimension to sort of how these studies, these new kinds of studies, are incorporated into medicine, like putting findings from your Parkinson's study into drug labels, if there's enough information that comes out of a study like that, that it might have relevance to population groups that are using L-Dopa, say?

- So I can give a general answer, then I actually am gonna ask Megan, 'cause Megan's an actual genetic counselor, and can talk about the personal experience of this. Sort of one of the implications of what we're talking about, and Michelle mentioned this in her comments, is sort of the precision capacity, to get more accurate for smaller groups. 'Cause what we have is very broad-based kind of shotgun medicine. And everyone wants better subgroups. And this is why you want biomarkers, or you want genetic markers, or what have you. And implicit in a lot of this is the ability to segment people into smaller and smaller demographic subsections that might have higher or lower potential benefits from a medicine. What's nice about that is that in the regulated sense the FDA will govern what goes on those labels. I think the bigger issue is what happens when people start having their own data from these things, and start making their own interpretations. And that's something that the genetic counseling community has been dealing with for a very long time with genetic data. And so I think the answers are there; that's why I wanted to make sure that you got to weigh in on it from a personal perspective.

- [Arthur] Sort of a follow-up on this. I mean, when I mentioned the ethical dimension, I was sort of like, FDA is not going to be able to get all this data into the label, I guess. So then, is there an ethical dimension to, if you're doing this kind of research, are you finding that it's important to personalized medicine, how do you confuse it, if you have a responsibility?

- [Megan] I think that that's a really good question. So this is something that rare disease groups face all of the time, right? Rare disease groups. Highly driven. Highly phenotype, genotype specific. Really, really trying to find their few people, get them together, and be able to learn more, and find druggable targets. And we're seeing more and more well-funded, parent-driven foundations, frequently. Often we're seeing this in pediatrics, because, understandably, parents will do anything for their children, which is natural and good. But still, that drive can skew the science that comes out of it. And so there is this real challenge, this real balance between empowering science, empowering community-driven science, empowering rare disease science, especially within unregulated mobile health research; there are just so many avenues now. And then the rubber meets the road when it comes to something like druggable targets and FDA, and we want to avoid what we've seen in the past, which is people trying to do off-label medication use, or create their own, unvetted therapies, and bring those to market.

- And if you look at the FDA, HHS guidance on consent, electronic consent, when it came out, part of what they recognized was the ability of electronic consent to relay changes in the protocol, and how important that was, and how much easier it was to disseminate those. And so I think sort of implicit in this is that you have a faster feedback loop for everything, not just changes to the protocol. And so.

- [Megan] Yeah, I think what we're saying is it's not a new problem, it's just now faster and bigger .

- Yeah, and that's the scale of it. A lot of these are old problems, but the scale, speed, reach of them has, in the last five years, just kinda gone exponential.

- And also some of the players are different now, which suggests that, for those of us who are really familiar with research ethics and translational ethics, like how you move research ethics to clinical interventions, it may involve different levels and types of education to ensure the translational process doesn't end up creating new or unanticipated ethical problems because of the players involved. And I think that increasingly, we are going to find that in order to address the clinical outputs of this type of research, we are going to have to bring along the more traditional players, such as physicians, who have been, in many ways, very reticent, sorry Kyle, to actually engage with research that's happening outside of traditional research mechanisms. So you know, they may just say, "Oh, well that's unvetted citizen science. This is research that we're not going to take seriously, so we're just gonna go with the traditional clinical research model." And yet the science is going to move forward, perhaps without these players. So while our recommendations here didn't address the downstream implications in the clinical context, clearly the next step is to try to figure out where on the translational pipeline there will be new requirements around bringing those other players into the fold.

- I just wanted to demonstrate my reticence, since I'm under attack now. I would encourage everyone to go to the website Crohnology, I think it's a .org?

- Crohnology with an H.

- Crohn. It's, the first part is spelled like Crohn's disease. And the founder of that website was a participant in our conversations. Really interesting man, and the mission that they have is to share their insights from different treatments for Crohn disease, many of which are not medical, but do include medical things: diets, exclusion diets, medicines that are being used, but sort of experimenting on themselves to find out what works, and posting that and sharing information. And then just take a look at the website, and then imagine that it's not Crohn's disease, but it is Type 1 Diabetes for children, or it's some other condition, rare genetic condition for children, in which parents are now trying out different things on their kids, and exchanging with one another information about that. That actually exists on a large scale on Facebook, where the parents of children, whose children have the same exact genetic variant, or they have the same condition, exchange information with one another. As Megan said, much of that is sort of wonderful and great, and it creates a lot of support and a lot of insights. In some cases, it's the best information we have about how to deal with these conditions. But you can also imagine sort of a dark side to that, where there have been parents of children with autism who are trying out giving bleach, what is essentially bleach, to their children, and other sorts of things that create risks. And now it's not just some crazy parent somewhere doing things that no one would endorse, but they're now exchanging information about that and trying it out. So really, you could imagine some wide-ranging effects from this. I do just want to point out a lot of what parents exchange is not about medicines either, but also just the use of physical therapy, or the use of speech therapy. You know, with many of these things, I mean, it's great that parents are able to exchange that information.

- [Yvonne] So my name is Yvonne Lau, I'm from the HHS Office for Human Research Protections. So I'm interested in what the committee thinks about the role of institutions, for example, in all this business. I mean, your recommendations cover the federal government and private institutions and so on. But I'm just thinking, have you considered the role of more traditional academic institutions? I want to say this because I understand that your interest is positioned in unregulated health research, mobile health research with mobile apps. But OHRP did an exploratory workshop on big data and health research just a few months ago, and where we positioned ourselves, we're kind of like, we've noticed that there is a lot of research that is outside of the Common Rule, not regulated by the Common Rule, but it's still done, often, by researchers from academic institutions. And then there's also the intersection, where these researchers actually start collaborating with private organizations. And then they, themselves, could then also have their own company where they then start doing research under the rubrics of their company. So that intersection, well, we're more interested in that intersection. That's where we situated our exploratory workshop. But I think that for institutions, our kind of conclusion at that time, if you like, seems to be that institutions do actually seem to have a big role to play in this. So I just wonder what you think about that?

- Well, we haven't considered the issue of unregulated research conducted by otherwise regulated research entities. I think we did talk about the issue in terms of how academic medical centers, for example, had an institutional commitment to comply with the highest standards, regardless of whether they were legally required, because that's part of their fabric, that's what they promise the patients, that's what they promise their professional staff. And it would totally undermine their commitment. Although you know, we all can think of instances where many of the, quote, "best institutions" made mistakes in research ethics. And so merely having a commitment does not necessarily translate into all the results that you want. I really can't answer that question, other than to say that I think it's a very important issue as well. Several of you have got me thinking about, okay, maybe we have addressed, like, the first generation issues, but not the second and third generation issues that could be raised in different contexts of how research that might normally, if I can put that in quotes, normally be regulated through various arrangements downstream becomes unregulated for one reason or another. And what, if anything, we can do about it. So I thank you both for those comments, and we'll need to give that more thought.

- I think there's one potential opportunity for conventional academic institutions to play in this, and they have already played in it, I think, in Sage's early days actually. By partnering with parent organizations, or by partnering with corporations and doing research, a lot of that research that would have been unregulated becomes regulated. And so I think through those collaborations, the academic institutions have pulled a lot of unregulated research into the regulated world, and therefore created a lot of opportunities for, you know, careful review, expert involvement, and those sorts of things, that make what could have been a lower quality type of research now a higher quality, and also subject to external review, which has its own value.

- I'd also just mention that in our forthcoming symposium, the head of an institutional IRB, Pearl O'Rourke, addresses some of these issues. And what I found particularly fascinating when she presented to us as a group was the ways in which institutions are struggling to accommodate this changing landscape, where there are different types of entities that are involved in biomedical research, health research very broadly conceived, and to determine which various players within institutions need to be involved. It's not simply institutional review boards. But there's a lot of different entities within organizations, and those that are outside of the traditional research mechanisms may not be familiar with all those various steps that are involved. I do think that there's opportunity for this back-and-forth. But my sense is that a lot of these parent-led organizations, for instance, get impatient with the process, and the bureaucracy of institutions. And so our institutions need to become more nimble to address what are clearly dire health needs, and organizations that feel that their needs aren't being adequately addressed in the contemporary landscape. So it will require some change on both sides. You wanna add more?

- No, I'm good on this one.

- [Emma] Hi, I'm Emma Jardis, I'm from the NIH Department of Bioethics. I was interested in the concept of group harms. And so I was wondering, one, if you have any examples of what that would look like in the context of, like, mHealth. And then also, I'm wondering, a lot of times in the medical system, stereotypes, stigmas, are already reinforced. So is there something different about them being reinforced by like, an app, compared to being reinforced in the actual clinic or with their doctors?

- Look at me. I'll deal with your second comment first. I agree with you that reinforcing stereotypes and stigma has obviously been a problem for a long time. One of the issues that we see is just sort of the recurrence of testing stereotypes, you know, like looking at alcoholism in Native American communities. Just doing the study itself reinforces the stereotype yet again, and proves yet again that it's an untrue stereotype. So that sort of thing, obviously, this is just one piece of that trend, or that phenomenon, that we want to avoid. But I think it's important to consider in this context. To your first question, I can't come up with an explicit example that has happened in real life, or a particular app or a particular research study conducted via mHealth reinforcing stereotypes or creating stigmas. But I think, you know, just the example that I gave. You could imagine that type of thing being studied through a mobile device and creating a group harm by reinforcing a stereotype. And you could imagine, there's a number of examples outside of mHealth that fit into the more sort of artificial intelligence, machine learning world, where this type of work takes stereotypes, takes perceptions that exist in the world, and sort of incorporates them into algorithms, which now reflect the biases and stereotypes; you know, it's now ingrained in the algorithm, but it really came from those same phenomena that occur in the real world, so to speak. So I think that's a major concern here. I'll just give one example, but I don't know, John can name, like, dozens of these examples, that early image machine learning techniques would identify certain African Americans with the term gorilla. And it was incorporating this sort of, the training resources had these sort of built-in problems that then got incorporated into the algorithm. And so that same kind of problem is likely to occur here.

- We also have learned from our colleagues that this relates to broader problems around disparities in connectivity, in terms of who would have access to participate in mobile health research, regulated or unregulated, in terms of data caps that might exist with particular types of mobile platforms that tend to be utilized by lower-resourced populations, which could then skew results of research. So we saw some maps of the distribution of types of mobile device platforms that mapped very closely onto socioeconomic status, and how that can then influence the results of mobile-device-enabled research, which could inadvertently be reinforcing stereotypes, or generating new types of stereotypes, based on who can actually participate in the research to begin with. You wanna add more?

- Yeah, I would just say that I think that, for me at least, being part of this has revealed a lot of meta dangers, and a lot of meta issues. A lot of these issues are social issues, they just happen to be reified inside this field in a very specific way. And the existence of ethics gives us a tool to think about how we might deal with them that we don't have in broader consumer tech. But to Michelle's point, if we exclude people because their data plan doesn't let them upload lots of pictures of their body for psoriasis apps, right, we might gather that this is primarily a disease of people who are in high socioeconomic conditions, then the AI or the machine learning system builds a model based out of that; it's fundamentally exclusionary. And worse, it has the veneer of research. Right, there's the veneer of research that's very popular for making claims, that says, you know, "Well, it's not just me who says this about psoriasis, right? I have the data." And Kadija Ferryman spoke at one of our events a while ago, and she talked about how the problem is we think about data as a given instead of as a gift, in the sociological sense. So we think that the data is equivalent to a fact, instead of simply being a measurement that was given, a gift at a moment in time by a person. Not everyone has the ability to give that gift. And I think, to Megan's point earlier, these aren't new problems. What's new is the ability to scale them. And I think when you can scale these things, you know, rare occurrences become common at scale. And I think that's one of the big takeaways for me in all of this, is that we have to start expecting rare outcomes as having a probability of one, and designing our systems for those. And one of the benefits of regulated research is it tries to prevent context collapse of a claim in science by making sure it's rigorous and repeatable and all of those things. And mobile, by its nature, makes context collapse trivial. And when you think about the impact of one retracted paper on vaccines and autism, and you say, imagine scaling that problem out but removing peer review, and third-party ethical review of the studies that provide the justification. The paper could get published in a predatory journal, if you're willing to pay the article processing charge. It's pretty easy to see a path for grifters to exploit this. And even if you don't want to go that far, you could say it's a pretty good path for people who are hopeful, despite the evidence, to go forward. And so I think those are the two groups that we need to try to regulate towards: to give the hopeful people some context, and to give the grifters some pause.

- [Megan] Do we have any more questions or comments from the audience?

- Going once?

- [Laura] Hi, I'm Laura Hoffman from the American Medical Association. Thank you very much for your presentation. This is more of a comment than a question, but I just thought it would be important for us to consider in these discussions that there are regulations right now that are currently proposed, but will be finalized shortly, that are going to basically provide greater consumer access to their EHR data, that is, of course, currently protected by HIPAA, and provide that data through smartphone apps, for example, at which point the data will no longer be protected by HIPAA. So it relates back to some of the conversation about going from regulated to unregulated data. You know, what happens when EHR data starts becoming much more a part of these mobile apps, whether it's for research that's regulated or unregulated. And unfortunately, there are very, very few privacy protections and guardrails built in around the transfer of this information. So again, there's this kind of balancing act that has to be done between what can be achieved for consumers individually, or as a kind of greater-good collective, and the push of data out of the regulated space, into the unregulated space. And I think some of these recommendations will be really valuable in talking about that, how this moves forward in the app space for clinical information that emerges generally. I appreciate that, and I think the transparency pieces that were brought up today are really applicable to that space as well, so thank you.

- Well, just a brief response. I think you're referring to the interoperability regs that were proposed in March, and I think they're very troublesome. The promise of electronic health records was that they were gonna be comprehensive, longitudinal, and interoperable. We made it to the first two, but if we adopt interoperability without any protections, and I would certainly emphasize the need for segmentation of health information, then we are just facilitating the transfer of everything with very few protections, and there aren't too many people who are following this, unfortunately. And I'd like to think we would, as a country, like to have greater control over our health information. We're close to giving away the store.

- [Megan] Any other final comments? And from our panel, any other final comments?

- I've said too much already.

- Well I would just add one closing comment. And that's to thank all of you for coming this morning, on a windy, rainy day in Washington. And I hope we'll have a chance to interact in the future with all of you. Because as you know, these are very complicated issues, and will take some working through. I didn't mention this before, but you probably figured it out, that the recommendations were intended to be free-standing. It's not a whole package that we think everybody's gonna, you know, simultaneously implement. And the thought was that if some of these would be implemented, it would be helpful and maybe generate the momentum for doing more. And so we're anxious to see what happens once it comes out. So thank you again, and this is our final sort of official meeting. And I want to thank my colleagues who are here, and those who may hear of this for all their support over the last three years. So thank you.

- Thank you.