The GSK.ai/Stanford Ethics Fellowship

Posted February 8, 2022

Organisation
Stanford Center for Biomedical Ethics (SCBE)

Location
Stanford, CA

Closing date: Open until filled

The Stanford Center for Biomedical Ethics (SCBE) and GSK.ai announce two to three post-doctoral fellowship positions focused on exploring the ethical challenges associated with using artificial intelligence and machine learning (AI/ML) to discover transformational medicines and to deliver improved clinical outcomes.

The post-doctoral fellowship positions are available as part of the Stanford Training Program in Ethical, Legal, and Social Implications (ELSI) Research and at the Center for Integration of Research on Genetics and Ethics (CIRGE). Candidates from underrepresented groups are strongly encouraged to apply.

Key facts 

• Co-Principal Investigators and Program Co-Directors (all at Stanford University): Russ Altman, MD/PhD (russ.altman@stanford.edu), Mildred Cho, PhD, and David Magnus, PhD

• Funding source: GSK.ai, an interdisciplinary tech organization inside GlaxoSmithKline that leverages AI/ML to enable data-driven drug development and clinical applications.

• Appointment: Two years, renewable with mutual decision of the candidate and program.  

• Qualifications: PhD and/or MD. We are seeking candidates with a background in artificial intelligence, machine learning, life sciences, and/or medicine, with a demonstrated interest in ethics, philosophy, social science, public policy, or other related disciplines.

• Location: Candidates will be based at the Stanford Center for Biomedical Ethics, Stanford University (Palo Alto, CA).

Job description 

The postdoctoral fellows will conduct independent research on ethical, legal and social considerations arising from the use of AI in the pharmaceutical industry, from early-stage  drug discovery efforts to downstream clinical applications. Representative topics for investigation are outlined below; similar independent proposals are encouraged.  

Through exposure to AI researchers, scientists, and ethicists at GSK.ai, fellows will gain an inside view of AI-driven biomedical research at a large pharmaceutical company, while retaining editorial independence in their research under the supervision of Stanford University faculty. Fellows will also be part of an interdisciplinary academic community at Stanford University, including faculty and fellows from this program and other affiliated programs, such as the Stanford Institute for Human-Centered Artificial Intelligence (hai.stanford.edu). Fellows are expected to gain practical experience in professional activities through programs such as the Stanford Benchside Ethics Consultation Service (a research ethics consultation program that assists life-sciences researchers in resolving ethical concerns in their research), one of the Stanford-affiliated clinical ethics consultation services, and/or teaching.

In addition to participating in SCBE and CIRGE activities, fellows will have access to a full range of courses, including genetics, social science, humanities, and law courses. Fellows may need formal coursework in genetics, ethics, or ELSI research methods. Mentors will assist each fellow in formulating an individualized curriculum and career strategy. All trainees are expected to present their research in scholarly venues. Fellowship support includes a stipend, tuition, and health insurance. Funds will be provided for each fellow to travel to one meeting per year. See below for more information about the Stanford Training Program in ELSI Research.

Application 

Stanford University and GSK profoundly appreciate the value of a diverse, equitable, and inclusive community. We define diversity broadly, and we are committed to increasing the representation of groups and geographies that are underrepresented in research. To facilitate these efforts, appropriate support will be put in place to ensure that candidates can be sourced from a worldwide talent pool, including from low- and middle-income countries, which are the focus of several of our research priorities.

To apply, please fill in the application form at https://bit.ly/gsk-stanford. Application deadline:  Rolling review of applications for a start date between January and September 2022. For questions, please email the Fellowship Coordinator, Megan Cvitanovic  (mcvitano@stanford.edu). 

The chances of discovering a new medicine and bringing it to market are notoriously slim. Pharmaceutical firms spend years conducting research only to see most of their potential remedies fail before they ever reach patients. GlaxoSmithKline (GSK) is embracing new technologies and the massive amounts of digitized health data that have emerged in recent years to speed up and improve drug development. At the heart of these efforts is the use of artificial intelligence (AI) and machine learning (ML) to analyze vast sets of biological and genetic data.

While AI/ML has enormous potential to transform drug discovery and clinical practice, the datasets GSK's researchers use, the algorithms they build, and the AI-driven decisions taken at each stage of drug development have safety implications for patients, raise important ethical questions, and will be subject to novel regulations. GSK wants to address these ethical questions, develop ethical and safe AI products, and shape the conversation on responsible AI in healthcare.

The responsible application of AI in drug discovery and clinical practice is a newly emerging field that poses unique challenges, which require original research to find novel ethical, policy and technical solutions. To support this effort, GSK has partnered with the  Stanford Center for Biomedical Ethics (SCBE) to fund several post-doctoral fellowship positions focused on exploring the ethical challenges associated with using artificial intelligence and machine learning to discover transformational medicines and to deliver improved clinical outcomes. 

About GSK.ai 

GSK.ai is a global network of researchers centered around AI hubs in London and San Francisco, where experts from various quantitative fields consider problems from different perspectives. The team consists of a diverse mix of scientists, software engineers, clinicians, and ML researchers, who carry out their own research, develop ML algorithms and AI products, and collaborate with an interdisciplinary network of experts ranging from top machine-learning researchers at Cambridge and Oxford to Silicon Valley software engineers.

It is this approach to technology that sets GSK apart, says Dr Hal Barron, a medical doctor, formerly at biotech pioneer Genentech and now chief scientific officer and president of R&D at GSK. “It’s genetics, functional genomics and the interpretation of the data they generate with machine learning that forms the core of our strategy. What  machine learning has been able to do—particularly over the past two or three years—is  deconstruct these massive data sets and elucidate the relationships that the various  genes have with each other.” 

Machine learning could help with drug discovery in countless ways. Among them are adaptive clinical trials in which machine learning assists in the identification, approval and distribution of treatments and vaccines. It could also help develop individualized treatments, nudging the pharma industry away from a “one drug for everyone” approach and towards treatment based on an understanding of which drugs will work for whom. 

About the Stanford Center for Biomedical Ethics

Established in 1989, the Stanford Center for Biomedical Ethics (SCBE) is an interdisciplinary hub for faculty who do research, teaching, and service on topics in bioethics and medical humanities. SCBE and its faculty have been widely recognized for leadership on a range of issues, such as new approaches to studying the ethical issues presented by new technologies in biomedicine, including Artificial Intelligence, CRISPR and Gene Therapy, Stem Cell Research, Synthetic Biology, and the Human Brain Initiative.

SCBE was among the first in the field to be designated by NIH as a Center for  Excellence in Ethical, Legal and Social Issues (ELSI) in Genetics and Genomics. And  SCBE is now the Coordinating Center for ELSI research at the National Human  Genome Research Institute. 

SCBE faculty teach throughout Stanford, from large undergraduate courses to ethics courses taken by all medical students. SCBE offers training for medical students who choose to specialize in biomedical ethics and medical humanities, as well as a highly rated research ethics seminar taken by over 350 graduate students and fellows a year.

Representative topics for investigation (alternative topic proposals are welcome)

• Datasets from genome-wide association studies (GWAS) are used by AI systems to identify promising targets for new medicines, but in many genetic datasets, low-income countries, minority groups, and poorer communities are underrepresented. Because gene expression is shaped by both genetic heritage and environmental factors, AI-driven early-stage drug discovery may lead to treatments that are less effective for understudied populations. To eliminate dataset bias and prevent its manifestation in downstream medicines and AI systems, research organisations are aiming to collect more data from under-represented groups. When this data collection is undertaken, how should research hesitancy in underserved communities be overcome, and how should questions of justice be addressed, both in terms of compensating communities for their data and ensuring that poorer communities that bear the burden of research also receive its benefits? After all, medicines and AI systems developed using their data may not be available or affordable to under-resourced groups and geographies. How should individual concerns over privacy and developing-country concerns over data neo-colonialism be taken into account?

• The types of ML models involved and the need to identify weak signals mean that AI-driven drug discovery relies on large-scale datasets, such as genome-wide association studies (GWAS), knowledge graphs, automated high-content imaging of perturbed cell cultures, or electronic health records (EHR) from large biobanks. As ML-driven research and data analysis become increasingly prevalent, there is a risk that AI-driven drug development could be biased towards accessible, established databases such as the UK Biobank. Notably, many of these data sources suffer from historical biases in experimental data or literature corpora, and addressing these biases, e.g., by adding patients from currently underrepresented groups, may take decades. Should we develop medicines based on the accessibility of data associated with a disease (e.g., diseases with a significant genetic component), or should development be driven by greatest need (e.g., infectious diseases)? Should this assessment depend on the success probability of a drug discovery effort? How should we think about AI-driven drug development that relies on biased large-scale genetic or EHR databases?

• Some areas of early-stage drug discovery lend themselves to an ML approach called active learning, in which the model can query an information source for new datapoints. This approach is also referred to as optimal experimental design, because the model is used to design the next experiment so that it gains the maximum amount of new information from the resulting data. During active learning, hypotheses are generated and tested by the model without input from AI researchers, and the model optimizes itself in the process. How can we ensure alignment of such an AI system with both experimental goals and ethical principles, to prevent unintended and potentially harmful algorithmic strategies? How should we think about the autonomy of the researchers in this scenario? Who is to be held accountable for decisions made by the active learning system? What restrictions might we impose on the types of experiments performed by model-driven active learning?
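The query-then-update loop behind active learning can be sketched in a few lines. This is a minimal, illustrative toy, not GSK's methodology: the hypothetical `oracle` stands in for a wet-lab experiment, and the "model" is just an interval of uncertainty over a one-dimensional threshold, always queried at its most informative point.

```python
def oracle(x, true_threshold=0.62):
    """Simulated experiment: does compound dose x produce a response?
    (Hypothetical stand-in for a real assay; the threshold is made up.)"""
    return x >= true_threshold

def active_learn(oracle_fn, n_queries=20):
    """Toy active-learning loop: maintain an interval of uncertainty about
    a 1-D threshold and always query its midpoint, the point where the
    model is most uncertain and the experiment is most informative."""
    lo, hi = 0.0, 1.0
    for _ in range(n_queries):
        x = (lo + hi) / 2        # most informative next "experiment"
        if oracle_fn(x):         # run the experiment, observe the label
            hi = x               # threshold lies at or below x
        else:
            lo = x               # threshold lies above x
    return (lo + hi) / 2         # final estimate of the threshold

estimate = active_learn(oracle)
```

Each query halves the model's uncertainty, which is why active learning can be far more sample-efficient than labeling data at random; the ethical questions above arise precisely because the loop chooses its own experiments.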

• A companion diagnostic (CDx) is a medical device that provides information essential for the safe and effective use of a corresponding drug. In the field of oncology, while CDx have historically been based on genetic screening or the assessment of a pathologist, future CDx could be based on artificial intelligence systems that leverage the vast amounts of data found in whole-slide images (WSI) of tumor biopsies or other complex biomarkers. How should clinical decision-making change as AI systems move from replicating analyses that could be performed by a human pathologist to unlocking patterns too complex for humans to understand or identify? How do these questions interact with patient autonomy, AI explainability, AI safety, and the role of healthcare professionals? Who carries the responsibility for AI-based treatment decisions? How should these evolving AI capabilities interact with healthcare professionals and patients at each stage of the technology's development?

• Individual data, in particular genetic and patient data, is increasingly used in AI-based drug discovery and clinical decision-making, and potentially contains comprehensive and very sensitive information about an individual. To prevent data misuse, what privacy protections, "terms of service", or compensation should be required for genetic and patient data, and are they fundamentally different from those for consumer data? The collection of patient data for the purpose of training ML models also raises concerns about informed consent. For example, patients from under-resourced communities may not be in a position to understand what their data is being used for. Furthermore, they may not have the financial option or legal means to abstain from providing sensitive data for profit. Do we need to re-evaluate our notions of patient consent and compensation under these circumstances? How should we think about patient consent in general, where data is used, e.g., to train complex AI systems that a lay person cannot reasonably understand?

• Before AI systems can be trained and tested on a given dataset, the data usually undergoes pre-processing, and sometimes substantial curation. This may include steps such as standardization, imputation of missing data, or removal of outliers whose inclusion violates model assumptions or impedes training. How should we think about "throwing away" samples, given that patients may have undergone additional procedures (blood tests, imaging, etc.) to provide the data? Are we discriminating against minority groups whose data are excluded during model development? In some circumstances, data curation may be the only way to obtain an AI system at all, given the current technical state of the art. Under these circumstances, is it justified to develop an AI system on curated data?
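To make the curation steps above concrete, here is a minimal, hypothetical sketch of such a pipeline (imputation, standardization, outlier removal). The function name and z-score cutoff are illustrative assumptions, not part of any real system; the point is that ordinary-looking cleaning code silently decides which patients' data is kept.

```python
import statistics

def preprocess(samples, z_cutoff=3.0):
    """Illustrative curation pipeline: impute missing values with the mean,
    standardize to z-scores, then drop samples beyond z_cutoff standard
    deviations. Returns the cleaned data and how many samples were discarded."""
    observed = [x for x in samples if x is not None]
    mean = statistics.mean(observed)
    imputed = [x if x is None else x for x in samples]       # placeholder list copy
    imputed = [x if x is not None else mean for x in samples]  # imputation step
    std = statistics.stdev(imputed) or 1.0                   # guard against zero spread
    z_scores = [(x - mean) / std for x in imputed]           # standardization step
    kept = [z for z in z_scores if abs(z) <= z_cutoff]       # outlier removal step
    return kept, len(samples) - len(kept)

cleaned, dropped = preprocess([1.0, 2.0, None, 3.0, 100.0], z_cutoff=1.5)
```

In this toy run, the tighter cutoff discards the extreme sample entirely, which is exactly the ethically loaded decision the bullet describes: the discarded datapoint may represent a real patient.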

• While many ML algorithms can be run with the computational resources of a standard PC, many state-of-the-art models rely on expensive computational infrastructure (local and/or cloud-based services). By developing resource-intensive AI systems, are we effectively preventing their use in low- to middle-income countries that cannot provide the compute infrastructure? If local computational resources are not sufficient to guarantee the highest accuracy, for example because a simpler algorithm with lower predictive power has to be substituted, should we deploy the system at a lower standard? Furthermore, many AI safety mechanisms, such as estimating the confidence of model predictions, often require additional computational resources. To what extent is it acceptable to lower safety requirements if the compute infrastructure requirements are otherwise prohibitive?

The Stanford-GSK.ai Training Program in ELSI Research

The Stanford-GSK.ai Training Program in ELSI Research builds on Stanford University's 15 years of experience in developing independent scholars to become members of the ELSI research community. Trainees contribute through research on the ethical, legal, social, or policy implications of genetics and genomics; teaching; service; and training and mentoring the next generation of ELSI researchers. The program focuses on developing a knowledge base of core bioethics and ELSI concepts and literature, and sufficient understanding of a variety of disciplines to carry out innovative research in the interdisciplinary and collaborative settings required by ELSI scholarship. The program capitalizes on being situated in the Center for Integration of Research on Genetics and Ethics and the Stanford Center for Biomedical Ethics, and on strong relationships with Stanford faculty mentors in the Schools of Medicine, Law, and Humanities and Sciences.

• Program: Multi- and interdisciplinary three-year postdoctoral training.

• Trainees: PhDs and PhD candidates are recruited from diverse backgrounds, including genetics, biological sciences and engineering, medicine, computer and information sciences, philosophy, health services research, anthropology, and other social sciences.

• Mentors: We use a multiple-mentorship model, tailored to individual trainee needs and interests. Trainees are assigned a primary mentor responsible for the overall development of the trainee's plans, and secondary mentors assigned based on specific career, research methods, and topic area needs. Mentors come from diverse backgrounds including bioethics, social sciences, medicine, genetics, genetic counseling, medical genetics, health policy, health services research, philosophy, law, history, business, bioengineering, psychiatry, psychology, and informatics. Individualized career mentoring is an important aspect of the program.

• Program Co-Directors: Mildred Cho, PhD, the current director of the research training program, will be joined by Holly Tabor, PhD, as Co-Director, bringing her experience in ELSI research and training.

• Program Faculty: 14 Program Faculty from 9 primary departments and centers participate, representing the Schools of Medicine, Humanities and Sciences, Law, and Engineering, and conducting ELSI-relevant research. Core Faculty members Holly Tabor, PhD, David Magnus, PhD, Hank Greely, JD, and Kelly Ormond, MS, are experienced mentors in the program. Four new faculty were added, broadening the range of opportunities for ELSI research projects and methodological approaches for fellows.

• Education Program: The individualized training program for each trainee includes core courses in bioethics and human genetics, plus program-specific ELSI seminars, providing rich interdisciplinary interaction with faculty and trainees from this program and from other training programs. A wide range of elective courses is available at Stanford University. Practicum training in research ethics consultation and clinical ethics consultation, and opportunities to gain teaching experience, are key components. Career development opportunities include participation in an award-winning Grant Writing Academy and the opportunity to obtain a teaching certificate through Stanford University.

• Research Program: Mentored research by trainees brings together faculty from diverse disciplines to identify and address important and novel ELSI issues through empirical or normative research. Trainees have numerous opportunities to conduct research as part of ongoing ELSI projects as well as to develop an independent research agenda, presenting their research at professional meetings and publishing in peer-reviewed journals.
