Machine learning, faster

Does speed matter in machine learning?

I remember once speaking with a machine learning researcher who worked at a large company. He told me that a product team had approached him with a very exciting idea to do with text summarisation. He started looking into the problem and made some very significant contributions over the course of 12 months — going so far as to publish papers on the topic at top-tier conferences. I asked him whether his ideas had made it into the product in question. Unfortunately, the answer was no: by the time his research was completed, the product team had moved on from the problem and weren't interested in having the solution anymore 😭.

🚢 Quickly deploying models to production

Quickly deploying models to production is one of the biggest roadblocks to impactful machine learning. In many companies, this boils down to who is trusted to do the work: often, 'Scientists' design and train a model, and then hand it over to 'Engineers' to put it into production. This implicitly fosters a "throw it over the wall" mentality: people who train models don't have to think about how complex they will be to ship, and folks who ship models can throw one back over the wall if it isn't behaving as it should. The most common complaint I've heard from Scientists is along the lines of "I trained this model months ago and I'm just sitting here waiting for it to be shipped," while Engineers retort that they get no recognition for doing the hard work of actually making production inference happen. A frustrating experience all around.

🔍 Quickly validating misbehaviours

Last year, after we re-launched our help screen search system, one of the most common questions we got from product teams was "when I search for X, why doesn't article Y show up?" Trying to explain machine learning algorithms is hard enough; diagnosing minor misbehaviours felt even more challenging. Has something gone wrong with our data pipeline, our model, or how we post-process the results downstream?
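
In practice, the quickest way to answer that question is to check each stage in isolation. Below is a minimal diagnostic sketch for a hypothetical embedding-based search pipeline — the `encode`, `indexed_articles`, and `postprocess` names are illustrative stand-ins, not our actual code — that first confirms the article was indexed, then inspects the model's raw ranking, then checks whether downstream post-processing dropped the result.

```python
# A minimal diagnostic sketch (illustrative only): given a query and the
# article we expected to see, check each stage of an embedding-based search
# pipeline in turn to localise the misbehaviour.
import numpy as np

def diagnose(query, expected_id, encode, indexed_articles, postprocess, top_k=5):
    # Stage 1: data pipeline -- was the article indexed at all?
    if expected_id not in indexed_articles:
        return f"'{expected_id}' never made it into the index (data pipeline issue)"

    # Stage 2: model -- where does the raw similarity ranking place it?
    query_vec = encode(query)                                   # shape: (d,)
    ids = list(indexed_articles)
    matrix = np.stack([indexed_articles[i] for i in ids])       # shape: (n, d)
    scores = matrix @ query_vec / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    ranking = [ids[i] for i in np.argsort(-scores)]
    raw_rank = ranking.index(expected_id)
    if raw_rank >= top_k:
        return f"model ranks it #{raw_rank + 1}, outside the top {top_k} (model issue)"

    # Stage 3: post-processing -- did a downstream filter or threshold drop it?
    final = postprocess(ranking[:top_k])
    if expected_id not in final:
        return "model surfaced it, but post-processing filtered it out (downstream issue)"
    return f"article is shown at position {final.index(expected_id) + 1}; no issue found"
```

Being able to run something like this in minutes, rather than re-deriving the answer by hand each time, is what makes the difference between a quick reply to a product team and a multi-day investigation.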

♻️ Quickly repurposing models for new problems

Last year, Monzo went through a challenging period where non-urgent response times for customer support were on the order of days rather than hours. The entire company rallied behind this: engineers, designers, and lawyers all dropped what they were doing to respond to customers.

⏰ Measuring time-to-results, not results

The systems mentioned above use an encoder architecture that was published in 2017 (based on the "Attention Is All You Need" paper). While working on them, we felt that, overall, we were taking advantage of the latest research. Then, in 2018, deep pre-trained language models (like ELMo, ULMFiT, and BERT) appeared and started taking the top spot in a variety of research challenges — and many of them were open sourced. Every few months the state of the art was changing. As a non-research team that focuses on building systems, how could we keep up with this pace of research?
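
One tactic that helps is to treat the encoder as a replaceable component rather than a fixed choice: if downstream code only depends on a small "encode some texts, get back vectors" interface, swapping a 2017-era architecture for a newer pre-trained model becomes a much smaller change. The sketch below illustrates the idea under that assumption — the class and function names are hypothetical, not our production code.

```python
# A minimal sketch of decoupling a system from any particular text encoder,
# so an older model can be swapped for a newer pre-trained one without
# touching downstream code. All concrete classes here are illustrative.
from typing import Protocol, Sequence
import numpy as np

class TextEncoder(Protocol):
    def encode(self, texts: Sequence[str]) -> np.ndarray:
        """Return one embedding per input text, shape (len(texts), dim)."""
        ...

class AveragedWordVectors:
    """Stand-in for an older encoder, e.g. averaged word embeddings."""
    def __init__(self, vectors, dim):
        self.vectors, self.dim = vectors, dim

    def encode(self, texts):
        out = []
        for text in texts:
            tokens = [self.vectors[t] for t in text.lower().split() if t in self.vectors]
            out.append(np.mean(tokens, axis=0) if tokens else np.zeros(self.dim))
        return np.stack(out)

class PretrainedTransformerEncoder:
    """Stand-in for a newer pre-trained model, wrapping whatever library provides it."""
    def __init__(self, model):
        self.model = model  # assumed to expose its own encode(texts) method

    def encode(self, texts):
        return np.asarray(self.model.encode(list(texts)))

def rank_articles(query, articles, encoder: TextEncoder):
    """Downstream code only depends on the TextEncoder interface."""
    vecs = encoder.encode([query] + list(articles.values()))
    q, docs = vecs[0], vecs[1:]
    scores = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [aid for _, aid in sorted(zip(-scores, articles), key=lambda x: x[0])]
```

With this shape, the useful metric becomes how long it takes to swap `AveragedWordVectors` for a newer encoder and re-run the evaluation, rather than which encoder happened to be state of the art on the day the system shipped.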

⬇️ Conclusions

There’s a famous quote that I’ve often heard: “to increase your success rate, double your failure rate” (it looks like Thomas Watson said this). This is as true in machine learning as it is anywhere else.

  • Speeding up Machine Learning Development. April 2019, Xcede Data Science Networking Event, London.
  • Using Deep Learning to Support Customer Operations. March 2019, ReWork Deep Learning in Finance Summit, London.
