Meet the speakers who make SciPy 2020 fascinating.
To all our speakers - thank you.
Speakers & Authors
Dr. Carpenter is an Institute Scientist and Merkin Fellow at the Broad Institute of Harvard and MIT. Her research group develops algorithms and strategies for large-scale experiments involving images. The team’s open-source CellProfiler software is used by thousands of biologists worldwide (www.cellprofiler.org). Carpenter is a pioneer in image-based profiling, the extraction of rich, unbiased information from images for a number of important applications in drug discovery and functional genomics. Carpenter completed her postdoctoral fellowship at the Whitehead Institute for Biomedical Research and MIT’s Computer Sciences/Artificial Intelligence Laboratory (CSAIL). Her PhD is in cell biology from the University of Illinois, Urbana-Champaign. Carpenter has been named an NSF CAREER awardee, an NIH MIRA awardee, a Massachusetts Academy of Sciences fellow (its youngest at the time), a Genome Technology “Rising Young Investigator”, and is listed in Deep Knowledge Analytics’ top-100 AI Leaders in Drug Discovery and Advanced Healthcare.
Dr. Andrew Chael is a NASA Hubble Fellowship Program (NHFP) Einstein Fellow at the Princeton University Center for Theoretical Science. He is a member of the Event Horizon Telescope (EHT) collaboration.
Andrew studies the extreme environments just outside the event horizons of supermassive black holes, particularly in the galactic center Sgr A* and M87. He uses supercomputer simulations to study the flow of plasma as it falls into the black hole and is launched outward in relativistic jets. He also develops new approaches for testing theoretical predictions using new methods of imaging these black holes with the EHT.
Andrew grew up in Albuquerque, New Mexico. He received a BA in Physics with a secondary concentration in Medieval studies from Carleton College in 2013, and earned a Ph.D. in Physics from Harvard University in 2019. At Harvard, Andrew worked at the Black Hole Initiative.
Andrew is a proud member of the Astronomy and Astrophysics Outlist of LGBTQIA+ members of the astronomical community.
Dhavide Aruliah is currently Director of Training & Data Residency at Quansight LLC. He has over 25 years of experience teaching & mentoring both in academia and in industry; his career has grown around bringing learners from where they are to where they need to be mathematically & computationally. He was formerly a university professor (Applied Mathematics & Computer Science) at Ontario Tech University before moving to industry to become the Director of Training at Anaconda Inc. While at Anaconda, he oversaw the creation of 5 courses at the heart of DataCamp’s Python Data Science curriculum that have over a quarter of a million completions. He has taught over 40 undergraduate- & graduate-level courses at five Canadian universities as well as numerous Software Carpentry & PyData tutorial workshops.
I’m a Software Engineer at Quansight where I contribute to and help maintain projects in the scientific Python ecosystem. I also work on consulting projects to help clients leverage open-source tools for data science applications. Details on my open-source contributions are available on GitHub.
Additionally, I co-organize the monthly MadPy Python meetup.
I hold a Ph.D. and an M.S. from the University of Wisconsin-Madison, where I studied experimental astrophysics as a member of the IceCube collaboration. Previously I attended the University of Texas at Arlington, where I received a B.S. in Physics.
Hugo is a data scientist, educator, writer, and podcaster at DataCamp. His main interests are promoting data & AI literacy, helping to spread data skills through organizations and society, and doing amateur stand-up comedy in NYC. If you want to know what he likes to talk about, definitely check out DataFramed, the DataCamp podcast, which he hosts and produces.
Freelance / QuantStack
I am a PhD student at Virginia Tech (VT) in Genetics, Bioinformatics, and Computational Biology (GBCB), working with Anne Brown and DataBridge, studying data science education and pedagogy for medical practitioners.
Former RStudio intern working on the gradethis package.
Arch Linux user maintaining the RStudio Preview, nteract, and Rodeo packages in the AUR.
Author of Pandas for Everyone.
National Center for Atmospheric Research
I am a physical oceanographer in the Ocean Section of the Climate and Global Dynamics Lab at the National Center for Atmospheric Research in Boulder, Colorado. I currently study problems involving small-scale turbulence & mixing, baroclinic instabilities and mesoscale oceanic eddies, continental shelf flows, and equatorial flows, using observational records from moored, shipboard, and satellite platforms as well as high-resolution realistic and idealized numerical models. I like software tools that enable easy, intuitive, convenient, and scalable analysis of datasets both big and small, so I help build and maintain xarray and contribute to the wider Pangeo ecosystem of software tools (such as xgcm).
Matt has been using Python to work with data in science and at startups since 2008, after getting degrees in Astronomy and Aerospace Engineering. He maintains some moderately popular open-source Python libraries, including SnakeViz and Palettable. Today Matt is the lead software engineer at Populus, a startup helping city governments manage various aspects of transportation.
Research Applications Laboratory, National Center for Atmospheric Research
Joe Hamman is a computational hydroclimatologist. He has a Ph.D. in Civil and Environmental Engineering from the University of Washington. He is currently a post-doctoral fellow at the National Center for Atmospheric Research in the Computational Hydrology Group (part of the Research Applications Laboratory’s Hydrometeorological Applications Program) where his research focuses on developing and evaluating computational hydrologic models to better understand the sources of uncertainty in future hydrologic projections. He is an active developer of a number of open-source Python projects, including Xarray and Pangeo.
Currently, I am a software engineer and researcher at Google on the Accelerated Sciences team, which leverages Google’s unique expertise to solve scientific research problems. For example, my team has used deep learning for drug discovery.
Previously, I was a research scientist at The Climate Corporation, where I worked on models of weather and climate risk for agriculture. At Climate, I wrote xarray, a library for labeled arrays in Python.
I am also a frequent contributor to the open source scientific Python stack.
I have a Ph.D. in Physics from UC Berkeley (2013) and a B.A. in Physics from Swarthmore College (2008).
University of California, Riverside
I am the Associate Director of the Center for Geospatial Sciences at the University of California, Riverside.
I study social inequality and spatial structure in neighborhoods, cities, and regions.
My work informs urban policy and planning with applied spatial data science and open-source software development.
Novartis Institutes for Biomedical Research
Eric is a data scientist at the Novartis Institutes for Biomedical Research. There, he conducts biomedical data science research, with a focus on using Bayesian statistical methods in the service of making medicines for patients. Prior to Novartis, he was an Insight Health Data Fellow in the summer of 2017, and defended his doctoral thesis in the spring of 2017.
Eric is also an open source software developer, and has led the development of nxviz, a visualization package for NetworkX, and pyjanitor, a clean API for cleaning data in Python. In addition, he has made contributions to a range of open source tools, including PyMC3, matplotlib, bokeh, and CuPy.
His personal life motto is found in Luke 12:48.
Mike McCarty is a Director of Software Engineering at the Capital One Center for Machine Learning, where he leads a team of engineers building scalable data science and machine learning tools. Mike also contributes to open source software projects in the PyData ecosystem. He has over 15 years of experience in software engineering and scientific computing in astronomy, computational sciences, data science, machine learning, and enterprise products.
Eric holds a Ph.D. in history from the University of Pennsylvania, an M.S. from Pennsylvania State University, and a B.A. in computer science from Utah State University.
Before joining Enthought, Eric spent three decades working in software development in a variety of fields, including atmospheric physics research, remote sensing and GIS, retail, and banking. In each of these fields, Eric focused on building software systems to automate and standardize the many repetitive, time-consuming, and unstable processes that he encountered.
King Abdullah University of Science and Technology
Dr. David R. Pugh is a staff scientist with the King Abdullah University of Science and Technology (KAUST) Research Computing Core Labs, where he provides data science training and consulting services to KAUST students, faculty, and research scientists. David is a certified Software and Data Carpentry instructor with extensive teaching experience, having taught Software and Data Carpentry workshops in Japan, Saudi Arabia, and the UK. At KAUST, David is the lead instructor of the popular [Introduction to Data Science Workshop Series](https://github.com/kaust-vislab/introduction-to-data-science-workshop), where he teaches programming in Python, Bash Shell, and SQL and best practices for reproducible research using Git, Conda, and Docker.
University of California Riverside
Sergio Rey is Professor in the School of Public Policy and Founding Director of the Center for Geospatial Sciences at the University of California, Riverside. Rey’s research interests focus on the development, implementation, and application of advanced methods of spatial and space-time data analysis. His substantive foci include regional inequality, convergence and growth dynamics, as well as neighborhood change, segregation dynamics, spatial criminology, and industrial networks. He is co-founder and lead of PySAL: the Python Spatial Analysis Library.
Conference Speakers & Authors
University of Michigan
I am a Ph.D. candidate in the Department of Chemical Engineering at the University of Michigan, Ann Arbor. The research I conduct under Sharon C. Glotzer is focused on data-driven design of novel soft matter materials.
As part of my research, I am the lead developer of the open-source data management framework signac, which supports the scientific community in applying best practices in scientific computing. As a contributor to the general-purpose particle simulation toolkit HOOMD-blue, I strive to make signac integrate well with compute-intensive simulation workflows on leadership-class supercomputers.
Kavli Institute for Cosmology, Cambridge and Cavendish Laboratory, University of Cambridge
Fruzsina is in her third year of a PhD in cosmology at the University of Cambridge. She obtained an MSci and a BA from the University of Cambridge, studying experimental and theoretical physics at Gonville and Caius College. Her research focuses on developing computational methods for studying the evolution of the early universe, and on aspects of quantum field theory, the framework in which contemporary particle physics is formulated. During her PhD studies she worked at the British Antarctic Survey for six months, developing code to forecast sea ice extent over the Arctic using convolutional networks. She has previously interned at the Institute of Astronomy, Cambridge, working on automating the subtraction of host galaxy contamination from images of active galactic nuclei (supermassive black holes), and at London-based start-up Kokoon Tech Ltd., classifying sleep stages from electroencephalogram signals using machine learning.
I'm a machine learning engineer at Talkspace, where I build NLP-powered tools that enhance asynchronous talk therapy. My current work centers on early detection and continuous monitoring systems for acute mental health conditions and on patient-therapist matching. I have a Master's in Public Health with a specialization in sociomedical science and public health informatics, and prior to that a background in developmental biology and immunology. My research interests include reinforcement learning, automated machine learning, and fairness, accountability, and transparency in machine learning. I like developing open source tools for improving data science and machine learning practice.
I work on the design and implementation of high-performance programming systems, including languages, compilers, and runtimes. In particular, my focus is on building systems that enable both high performance and high productivity for large-scale system architectures, including both supercomputers and distributed systems. I also have considerable experience constructing tools that leverage the power of GPUs for general-purpose computing. I believe that high-performance computing should not simply be the domain of experts, but should be a tool available to all through the intelligent implementation of programming systems. My design philosophy is that these systems should be built the right way, not with the ad-hoc tools available to users today. I endeavor to make this vision a reality by designing abstractions that meet the needs of end users with minimal cognitive load. My hope is that one day this approach to designing programming systems will be leveraged by all programmers, not just experts. I currently work as a research scientist at NVIDIA Research. In 2014 I completed my Ph.D. in Computer Science at Stanford University under the guidance of Professor Alex Aiken. Before that, I graduated magna cum laude from Duke University in 2008 with degrees in Electrical and Computer Engineering (with honors), Mathematics (with high honors), and Computer Science.
I am a Research Engineer working on the Econ-Ark suite of macroeconomic simulation tools. These open source tools for structural modeling with heterogeneous agents are fiscally sponsored by NumFOCUS.
Monica Bobra is a research scientist at Stanford University in the W. W. Hansen Experimental Physics Laboratory, where she studies the Sun and space weather as a member of the NASA Solar Dynamics Observatory science team. Her research focuses on analyzing large data sets, on the scale of terabytes to petabytes, that describe the Sun and space weather. She also serves as Vice Chair of the advisory board for the SunPy Project and as an editor for the Journal of Open Source Software (JOSS).
Freelance / QuantStack
University of Michigan Sharon Glotzer Lab
Brandon is a second-year Chemical Engineering PhD candidate at the University of Michigan. His adviser is Prof. Sharon Glotzer. His research centers on developing analysis methods for particle-based simulation using concepts from information theory and on the self-assembly of colloids. In addition to his research, Brandon contributes to multiple open source scientific Python packages such as HOOMD-blue and signac.
When not on his computer, Brandon enjoys spending time with family and friends, exercising, cooking, and exploring the outdoors.
Joe Eaton is the Principal System Engineer for Data and Graph Analytics at NVIDIA. He has spent the last 7 years at NVIDIA working on applications of sparse linear algebra: CUDA libraries, cuSOLVER, cuSPARSE, nvGRAPH, and AmgX, and now works 100% on RAPIDS, developing Python APIs, cuML, and cuGraph. RAPIDS is an end-to-end platform for data science, including IO, ETL, model training, inference, and visualization. Previously he spent 18 years in oil & gas reservoir simulation and HPC. He is a frequent speaker at SC and GTC, and interfaces directly with engineers and mathematicians across industries.
University of Illinois at Urbana-Champaign
Matthew is a postdoctoral researcher at the University of Illinois at Urbana-Champaign with research expertise in experimental high energy physics. He works as a member of the ATLAS collaboration as part of the effort to discover physics beyond the standard model with experiments performed at CERN's Large Hadron Collider (LHC) in Geneva, Switzerland. He is also a researcher at the Institute for Research and Innovation in Software for High Energy Physics (IRIS-HEP). He previously did his Ph.D. (2019) research at Southern Methodist University, also on the ATLAS experiment, in searches for high momentum low mass resonances decaying to two b-quarks and produced in association with a jet.
Emmanuelle Gouillart is a scientific Python developer at Plotly, where she is a core developer of the plotly.py visualization library. She has a background in physics and materials science, and has carried out scientific research and software development over the last several years. She became a core contributor to Python’s popular image processing library scikit-image because a large part of her research relies on extracting quantitative data from image datasets. She has co-organized the EuroSciPy conference for the last ten years, and very much enjoys discussing image processing and visualization with Python users at conferences.
University of Texas at Austin
Sean is interested in tectonic-climate interactions, the role of catastrophism in the geologic record and marine geophysical imaging at nested resolutions. His current projects include tectonic and climate interactions in the St. Elias Mountains and Surveyor submarine fan, geohazards and margin evolution of subduction and transform faulting in Alaska, Sumatra, and Japan, and the geologic processes and environmental effects of the Cretaceous-Paleogene Chicxulub meteor impact.
University of Texas at Austin
I am a scientist and technologist living in Austin, Texas. I characterize stars and exoplanets using high resolution spectroscopy and probabilistic inference. I recently joined the Exoplanet Atmospheres research group with UT Austin Professor Caroline Morley. We want to measure the atmospheres of exoplanets, with a long view of finding out which exoplanets are habitable. I specialize in quantifying astrophysical uncertainty in the atmospheric measurement process. Previously I worked for the K2 mission, which continues to revolutionize our understanding of exoplanets and stellar activity, among other topics. I contribute to the scientific Python ecosystem, including the lightkurve toolkit for time series data analysis and the spectral inference framework Starfish. For my PhD I led metrology with the UT Austin Silicon Diffractive Optics Group, where I lithographically patterned immersion gratings. During a NASA GSRP fellowship at the JPL Microdevices Lab I improved the precision of these devices even further. Recently I’ve become excited about GPUs, autodiff, and other modern numerical innovations that stand to catalyze astronomical discovery.
Brendon holds a Ph.D. in mechanical engineering from the University of California Santa Barbara, and a B.Eng. in mechanical engineering as well as B.Sc. in computer science from Western University in Canada.
With more than 10 years of industry experience, Brendon is a recognized leader in machine learning who understands the science of oil and gas exploration. His experience as a computational geologist at ExxonMobil and as a research geophysicist at ION Geophysical add to his background in software development to enable cross-functional discussions that drive business value across oil and gas organizations.
I am in the last stretch of my PhD with Heidelberg University, and I conduct my research in the Saalfeld lab at HHMI Janelia in the Washington, DC metro area. I am interested in scalable neuron reconstruction from large 3D electron micrographs (EM) and in high-quality tools for efficient visualization, generation of dense ground-truth annotations, and proofreading in connectomics with 3D EM. Previously, I worked on cell tracking in developing embryos and used C++/Python with the SciPy and NumPy frameworks for image processing. My current work builds on top of the ImageJ ecosystem, most importantly the multi-dimensional image processing library ImgLib2. I created imglyb to bridge NumPy and ImgLib2 data structures with shared-memory access. Now it is possible to combine those frameworks in a single-process workflow; for example, multi-dimensional NumPy arrays can be visualized in powerful Java tools like BigDataViewer or Paintera.
Michael holds a Ph.D. in polymer science from The University of Akron and a B.S. in materials science and engineering from the University of Illinois at Urbana-Champaign with expertise in polymers for optoelectronic applications.
Prior to joining Enthought, he worked as a postdoctoral researcher at several institutions, where he developed improved physical models for organic electronic devices that connect the complex materials microstructure to semiconductor device physics. This research was enabled by custom open source software tools for physics-based device simulations, automated experimental measurements, and advanced data analysis.
Levi Strauss & Co.
I am a philosophy PhD turned data scientist who has worked on problems involving natural language at the University of California, as well as at IBM and now at Levi Strauss. I have experience teaching as well as giving presentations. I have given academic presentations at various philosophy conferences (including the major American and Australian philosophy associations) as well as at more technical data science conferences (including DeveloperWeek and Dataiku events). I have also taught intensive internal data science courses to new hires at IBM.
Princeton Plasma Physics Laboratory
Ralph Kube joined the Princeton Plasma Physics Laboratory as a research physicist in 2019 and is a member of the US SciDAC Center for High-Fidelity Boundary Plasma Simulations (HBPS), led by Principal Investigator Choong-Seock Chang. His research interest lies in understanding the edge and scrape-off layer regions of magnetically confined plasmas. Ralph's current work includes applying machine learning methods to accelerate the full-f gyrokinetic particle-in-cell code XGC1. He also develops novel workflow tools tailored to automatically analyze big and fast data streams produced by fusion experiments and high-fidelity numerical simulations.
Ralph graduated with a B.Sc. in Physics from RWTH Aachen University in 2004. He received a M.Sc. and a PhD in Plasma Physics from UiT The Arctic University of Norway. After that he worked as a postdoctoral research fellow at the University of Innsbruck in Austria and at UiT in Norway where he investigated the statistical properties of intermittent fluctuations observed in scrape-off layer plasmas.
Center for Astrophysics | Harvard & Smithsonian
Kelvin Lee is a postdoctoral researcher at the Center for Astrophysics | Harvard & Smithsonian, where he applies physical chemistry and rotational spectroscopy to understand the physics and dynamics of astrophysical processes. His research involves using laboratory analogs of interstellar conditions to discover unobserved, highly reactive molecules that may be of interest to questions regarding star and planet formation and the origins of life.
Aside from astrochemistry and spectroscopy, his passions are in open source and reproducible (data) science, machine learning, and statistical inference.
Fritz Lekschas is a Ph.D. candidate in computer science at the Harvard School of Engineering and Applied Sciences with a strong passion for scalable biomedical data visualization, bioinformatics, web development, and design. As part of his Ph.D., Fritz Lekschas joined Peter Kerpedjiev and Nezar Abdennur in building the HiGlass framework (https://higlass.io) and several related tools (http://genomevis.lekschas.de). For more information, visit his personal website or his Twitter feed. Fritz always writes about himself in the third person because it sounds so regal!
Johns Hopkins Applied Physics Laboratory
Jon is a data scientist and software developer with the Air and Missile Defense Sector of the Johns Hopkins Applied Physics Laboratory. He has interest and experience across a range of technical domains, programming languages, data visualization techniques, and data science technologies. Jon holds bachelor’s degrees in Physics and Mathematics from Millersville University and a master’s degree in Computer Science from Johns Hopkins University, and is currently pursuing a Master of Information and Data Science degree from UC Berkeley.
I am a postdoctoral researcher and lecturer in the Statistics Department of Stanford University, USA. I work with Susan Holmes, Russell Poldrack, and Xavier Pennec. My research aims to create computational representations of the human body at different scales. At the macroscopic scale, I am interested in learning quantitative descriptions of organ shapes and functions, together with their normal and pathological variations in the population; for example, learning the manifold of brain anatomy or the manifold of brain activity during certain functional tasks. At the microscopic scale, I am interested in studying cell shapes and functions, and at the nanoscopic scale, molecular shapes and functions. My goal is to leverage these heterogeneous multiscale representations to implement new computer-assisted diagnosis methods and develop innovative treatments for diseases.
I'm a lecturer at the Data Science Institute at Columbia University and author of the O'Reilly book "Introduction to Machine Learning with Python", describing a practical approach to machine learning with Python and scikit-learn. I am one of the core developers of the scikit-learn machine learning library, and I have been co-maintaining it for several years. I'm also a Software Carpentry instructor. In the past, I worked at the NYU Center for Data Science on open source and open science, and as a Machine Learning Scientist at Amazon. My mission is to create open tools to lower the barrier of entry for machine learning applications, promote reproducible science, and democratize access to high-quality machine learning algorithms.
Robert Nishihara is one of the creators of Ray, a distributed system for scaling Python applications to clusters. He is one of the cofounders and CEO of Anyscale, the company behind Ray. He did his PhD on machine learning and distributed systems in the computer science department at UC Berkeley. Before that, he majored in math at Harvard.
Thinking Machines Data Science
Ardie Orden is a Geospatial Analyst at Thinking Machines Data Science, a Philippine data science consultancy that helps people make data-driven decisions. He has worked with UNICEF, Qatar Computing Research Institute, Humanitarian OpenStreetMap Team, and different companies in Southeast Asia in order to help solve research and business problems that involve geospatial data. His research interests include GIS, deep learning with satellite images, data science for social good, and AI for social good.
Matthew is an open source software developer in the numeric Python ecosystem. He maintains several PyData libraries, but today focuses mostly on Dask, a library for scalable computing. Matthew worked at Anaconda Inc. for several years, then built out the Dask team at NVIDIA for RAPIDS, and most recently founded Coiled Computing to improve Python's scalability with Dask for large organizations.
Matthew has given talks at a variety of technical, academic, and industry conferences. A list of talks and keynotes is available at matthewrocklin.com/talks.
Matthew holds a bachelor's degree in physics and mathematics from UC Berkeley and a PhD in computer science from the University of Chicago.
University of Washington
Jacob Schreiber is a post-doc and IGERT big data fellow in the Genome Sciences Department at the University of Washington. Previously, he was a graduate student in the Computer Science and Engineering Department. His research focus is on the application of machine learning methods, primarily deep learning ones, to the massive amount of data being generated in the field of genome science. His research projects have involved predicting the three-dimensional structure of the genome using convolutional neural networks and learning a latent representation of the human epigenome using deep tensor factorization. These projects typically involve hundreds of millions to billions of samples, making them markedly "big data" problems. He routinely contributes to the Python open source community as the core developer of pomegranate, a package for flexible probabilistic modeling, and apricot, a package for data summarization for machine learning, and in the past as a core developer for the scikit-learn project. Future projects include one day being able to go outside again.
Henry Schreiner is a Computational Physicist/Research Software Engineer in High Energy Physics. He specializes in the interface between high-performance compiled codes and interactive computation in Python, in software distribution, and in interface design. He has previously worked on computational cosmic-ray tomography for archeology and high performance GPU model fitting. He is currently a member of the IRIS-HEP project, developing tools for the next era of the Large Hadron Collider (LHC).
Stanley Seibert is the senior director of community innovation at Anaconda and also contributes to the Numba project. Prior to joining Anaconda, Stan was chief data scientist at Mobi, working on vehicle fleet tracking and route planning. He has more than a decade of experience using Python for data analysis and has been doing GPU computing since 2008. Stan received a Ph.D. in experimental high-energy physics from the University of Texas at Austin and performed research at Los Alamos National Laboratory, University of Pennsylvania, and the Sudbury Neutrino Observatory.
Department of Astronomy, University of Washington
Brigitta Sipőcz is a DIRAC Fellow at the Data Intensive Research in Astrophysics and Cosmology Institute at the University of Washington. She is an astronomer by training, and occasionally still observes the night sky, but she currently spends most of her time developing and maintaining open source astronomy software. She has recently become the lead developer of astroML, and also maintains several packages in the Astropy Project. She also has a keen interest in finding ways to promote the open development model and make tools more sustainable. She is a fellow of the Software Sustainability Institute.
Christine Smit works at NASA's Goddard Earth Sciences (GES) Data and Information Services Center (DISC). She works on Giovanni, a tool designed to make the path between data and science simpler so you don't have to be an expert to use remote sensing data. Her current work involves migrating Giovanni to the cloud with Amazon Web Services. She enjoys the challenge of working with large, scientifically relevant datasets.
Adam Thompson is a Senior Solutions Architect at NVIDIA and the creator of cuSignal, a GPU-accelerated signal processing Python library housed under the RAPIDS Open GPU Data Science project. Adam’s technical interests involve signal processing, applications of deep learning to radio frequency data, high performance computing, and data compression. He holds a master’s degree in Electrical and Computer Engineering from Georgia Tech and a bachelor’s degree from Clemson University. You can find him tweeting about signals, GPUs, indie music, and baking at @adamlikesai.
Jozef Stefan Institute
Horacio Vargas Guzman is a lead scientist at the Jozef Stefan Institute in Slovenia. He works on the theoretical development and implementation of analytical and computational methods for exploring new material properties, with special interest in the physical phenomena of soft matter and biological inhomogeneous systems. One of his recent developments is a computational multiscale method for efficient calculation of the free energy landscapes of virus capsids as a function of environment-mediated variables, such as the pH or ionic strength of the buffers in which the viruses are located. Previously, he was a postdoctoral researcher at the Max Planck Institute for Polymer Research in Germany, focused on the development and assessment of new multiscale algorithms for molecular simulation. In 2014 he obtained his doctorate in Condensed Matter Physics and Nanotechnology from the Universidad Autónoma de Madrid; his PhD focused on developing theoretical models for the peak interaction forces in dynamic atomic force microscopy techniques. He has presented his published work at several scientific conferences in Europe and America, mainly devoted to soft condensed matter physics and nanotechnology, as well as at scientific computing workshops and tutorials on simulating physical systems with HPC, C/C++, Fortran, and Python. He is also a Software Carpentry instructor and an avid user of the scientific Python stack.
Jake VanderPlas was formerly both the Director of Open Software and the Director of Research in Physical Sciences at the eScience Institute; he now works at Google. He comes from a background of machine learning and data-intensive astronomy and astrophysics. Alongside his astronomy and computing research, Jake is interested in encouraging open and reproducible research practices across scientific disciplines. To this end, he has been very active in the world of open-source Python. He is a core maintainer of, or regular contributor to, several well-known Python projects, including scikit-learn, SciPy, Matplotlib, and IPython. He has authored several packages focused on promoting open and reproducible science, including astroML, scidbpy, mpld3, and others. He has served on the program and review committees of a number of well-known Python conferences, including the PyCon, SciPy, and PyData series.
He is a co-author of the Python-powered text Statistics, Data Mining, and Machine Learning in Astronomy, and has presented many tutorials on the subject. He occasionally blogs about Python, Machine Learning, Visualization, and related topics at http://jakevdp.github.io.
I am involved in a number of open source projects, the biggest being xtensor and xsimd. I also work a lot on technologies in the Project Jupyter space. I did my master’s studies in Robotics, Systems and Controls and still use ROS on a daily basis.
I am currently employed at a small and very fine software consultancy in Paris called QuantStack, where we are lucky to work mostly on open source software.
Poster Presenters & Authors
Indian Institute of Technology Mandi
Shreyas Bapat is a final-year undergraduate student at the Indian Institute of Technology Mandi, majoring in Electrical Engineering. His research interests include deep learning, performance, computational astrophysics, and programmable matter. He helped form the Computational Astrophysics group at IIT Mandi, which develops algorithms for solving complex physical and observational problems in astrophysics and astronomy and for working with related datasets.
Shreyas also maintains EinsteinPy and is a prominent contributor to poliastro and pytorch-lightning. He is a Managing Member of the Python Software Foundation.
Alex Bozarth is a software engineer at the IBM Center for Open Source Data and AI Technologies (CODAIT). He is a PPMC member for Apache Livy, part of the Apache Spark ecosystem. He is currently active in the JupyterLab project and in Elyra, a suite of JupyterLab extensions.
GitHub: @ajbozarth Twitter: @stbando
Indian Statistical Institute Kolkata
A final-year undergraduate student working as a researcher at the Indian Statistical Institute Kolkata on hyperspectral imagery applications using computer vision approaches, and a part-time data science instructor. A former summer research intern at the Indian Space Research Organisation, with experience in AI applications for Synthetic Aperture Radar images. Currently a 2020 Google Summer of Code mentor for OpenMined.
KLS Gogte Institute of Technology
Mr. Gajendra Deshpande holds a master's degree in Computer Science and Engineering and works as an Assistant Professor in the Department of Computer Science and Engineering, KLS Gogte Institute of Technology, Belagavi, Karnataka, India. He has 11 years of teaching experience and one year of Linux and network administration experience. Under his mentorship, teams won Smart India Hackathon 2018 and Smart India Hackathon 2019. He is the Technical Director of Sestoauto Networks Pvt. Ltd. and the founder of Thingsvalley. His areas of interest include programming, web design, cyber security, artificial intelligence, machine learning, brain-computer interfaces, the Internet of Things, and virtual reality. He has presented papers at NIT Goa, SciPy India 2017 (IIT Bombay), JuliaCon 2018 (London), SciPy India 2018 (IIT Bombay), SciPy 2019 (USA), PyCon FR 2019 (Bordeaux, France), and IIT Gandhinagar.
University of Texas at Austin
Fan studies the organization, sustainability, and evolution of scientific software ecosystems.
Los Alamos National Lab (LANL)
Currently, I am a Post-Bachelor Researcher in Los Alamos National Lab's Theoretical Division, working on atmospheric testing and modeling. I will be continuing my education in the University of Iowa's doctoral program in Computer Science starting in the fall of 2020. While I have a wide range of interests, from machine learning to parallel programming, all of them center on data analytics. My goal is to develop and expand data analytic techniques in order to find solutions to new challenges.
Pradipta Ghosh is a Software Engineer at IBM, where he works on the development of distributed machine learning frameworks. His work also involves enabling and optimizing open-source packages for AI infrastructure on Power and other platforms. Prior to that, he made significant contributions in areas such as file systems and block device drivers.
William Horton is a Senior Software Engineer on the AI Services team at Compass, a tech-powered real estate brokerage. He works on applying machine learning to improve real estate agents' productivity for tasks like contact organization and outreach, and to streamline the home buying process with easier inventory discovery and personalized recommendations.
Sangeeth Keeriyadath is an Advisory Software Engineer at IBM, where he works on the design and development of distributed machine learning frameworks. His work also involves optimizing open-source packages for IBM Power servers. In his prior role he was a technical lead in IBM PowerVM, and has made significant contributions to device drivers for virtual adapters (vSCSI / NPIV), Shared Storage Pools, Live Partition Mobility, and Flash Caching.
ARK Information Systems
He received a Master of Engineering degree from Tokyo University of Science in 2012 and has been a Software Engineer at ARK Information Systems since 2012.
West Health Institute
Dr. Lu works at the West Health Institute in La Jolla, a nonprofit medical research organization. Dr. Lu earned his PhD in 1998 from the Electrical and Computer Engineering Department at the University of California, San Diego, after receiving SM and SB degrees from the Massachusetts Institute of Technology.
Dr. Lu has been doing machine learning for over 25 years. His interests include machine learning, interactive visualization, data imputation/anonymization, and computing infrastructure.
Prior to joining West Health, Dr. Lu was involved in several startups using Python as the core infrastructure for applications such as ecommerce, network communications, and digital animation.
Prabhu Ramachandran has been a faculty member in the Department of Aerospace Engineering, IIT Bombay, since 2005. His research interests are primarily in particle methods and applied scientific computing. He has been active in the FOSS community for more than a decade. He co-founded the Chennai Linux User Group in 1998 and is the creator and lead developer of Mayavi. He has contributed to the Python wrappers of the Visualization Toolkit. Prabhu has a Ph.D. in Aerospace Engineering from IIT Madras. He is an active member of the SciPy community, a member of the Society for Industrial and Applied Mathematics, and a nominated member of the Python Software Foundation.
West Health Institute
Dr. Jose Unpingco is currently the Senior Director for Data Science/Machine Learning at West Health Institute in La Jolla, a nonprofit medical research organization. Dr. Unpingco earned his PhD in 1997 from the Electrical and Computer Engineering Department at the University of California, San Diego.
Prior to joining West Health, Dr. Unpingco worked with the SSC Pacific High Performance Computing Center as an on-site director for the PETTT component of the DoD High Performance Computing Modernization Program (HPCMP), where he helped develop large-scale file transfer technology that is still in use today and encouraged the DoD to adopt open-source technologies such as Python for scientific computing.
In addition to his work at SSC Pacific, Dr. Unpingco has extensive industrial experience as a research engineer and technical director at Hughes Aircraft Co., Raytheon, Mission Research, and ATK, working on a wide range of systems: underwater acoustics, adaptive antennas, radar detection and imaging, and modern target tracking.
Dr. Unpingco is the author of two internationally published books by Springer titled “Python for Signal Processing” and “Python for Probability, Statistics and Machine Learning.” In addition to his duties at West Health, Dr. Unpingco is an invited lecturer at UCSD, teaching undergraduate/graduate Data Science classes. He also sits on the industry advisory council for UCSD Extension's Data Science and Machine Learning program.
University of Georgia
I’m a PhD candidate at the Institute of Bioinformatics at the University of Georgia, advised by Arthur S. Edison, Jonathan Arnold, and Heinz-Bernd Schüttler. I graduated from Nanjing University, where I worked on phylogenetics in the lab of Jianqun Chen.
Explanatory modeling of biological phenomena has been the main topic of my research. My recent work includes the extraction and analysis of time-series metabolic data, automated analysis of time-series NMR, and a new framework for modeling metabolic dynamics. My work also involves MCMC, neural networks, and time-series analysis.