Hristo Spassimirov Paskov

ThinkFast: Scaling Machine Learning to Modern Demands

Founder and CEO, ThinkFast Mathematical Intelligence Corporation
Intel Software Innovator for Artificial Intelligence

@IntelSoftware

Abstract

Machine learning has revolutionized the technological landscape, and its success has inspired the collection of vast amounts of data aimed at answering ever deeper questions and solving increasingly harder problems. Continuing this success critically relies on machine learning paradigms that can perform sophisticated analyses at the scale of modern data sets and that shorten development cycles by improving ease of use. The evolution of machine learning paradigms shows a marked trend toward better addressing these desiderata and a convergence toward paradigms that blend “smooth” modeling techniques classically attributed to statistics with “combinatorial” elements traditionally studied in computer science.

These modern learning paradigms pose a new set of challenges that, when properly addressed, open an unexpected wealth of possibilities. I will discuss how ThinkFast is solving these challenges through fundamental advances in optimization that allow machine learning to be treated more like a classical database technology. These advances let us scale a variety of techniques to unprecedented data sizes on commodity hardware. They also provide surprising insights into how modern techniques learn from data, including a characterization of the limits of what they can learn, and ultimately allow us to devise new, more powerful techniques that do not suffer from these limitations.

Bio

Hristo was born in Sofia, Bulgaria, and grew up in Westchester, New York. He earned his Bachelor of Science and Master of Engineering in computer science at MIT and his PhD in computer science at Stanford, where he was advised by Trevor Hastie and John Mitchell. ThinkFast is based on a new, massively scalable machine learning and optimization paradigm that Hristo developed toward the end of his PhD. This paradigm substantially decreases the time needed to develop novel machine learning models customized to harness important expert knowledge and qualitative properties of big data. Its efficiency allows expensive server and GPU clusters to be replaced with inexpensive commodity hardware. The resulting machine learning products are backed by rigorous statistical guarantees, making them suitable for real-time, mission-critical applications in defense, healthcare, finance, and commerce.
