Yes, you read that correctly. Big pharma is teaming up with companies like Palo Alto-based twoXAR to use artificial intelligence (AI) software to identify new drug candidates. Stanford's Asian Liver Center worked with twoXAR to screen 25,000 potential candidates for liver cancer; in four months, they arrived at a treatment that's headed for human trials. By comparison, the only FDA-approved treatment for the same cancer took five years to reach that point! Given the industry-average $2.6B price tag to bring a new drug to market, AI's potential to cut drug development costs is attracting a lot of attention. (Read "Supercomputers Are Stocking Next Generation Drug Pipelines" in Wired magazine.)
So what’s going on?
Artificial intelligence took a big leap forward in 2006, when scientists started using deep neural networks (DNNs) to recognize patterns in data. DNNs are behind Google's driverless car and, more recently, its translation bot, which translates text from one language to another (such as Chinese to English) at a level approaching human ability.
What's less well known is how deep neural networks are being used to create new content. Feed hours of music to a network and it can compose a new piece that sounds like something written by Chopin (listen) or Bach (listen). Feed a Van Gogh painting and a photo you took into a DNN, then watch it produce a new work of art you might mistake for a real Van Gogh. (Try it yourself at DeepArt.io.)
It seems almost magical that computers can generate new ideas. I'll cover two deep neural network techniques used today to do this:
- Recurrent neural networks (RNNs)
- Style transfer
Recurrent Neural Networks (RNNs)
Predicting an outcome sometimes requires looking at a trend rather than data at a single point in time. For example, you probably want to look at a stock's price over the course of a month or a year when predicting its likely value tomorrow. Time-series data like this is what RNNs are designed to understand.
To train an RNN on music by Bach, you first break each Bach composition into an ordered sequence of notes. For each training cycle, you feed the network a note along with the next note in the piece. Do this enough times and the network eventually learns that when it sees a B#, it can predict the five most likely notes to come next. As simple as this sounds, it's surprisingly effective.
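The note-and-next-note training pairs described above can be sketched in a few lines. The note sequence here is invented for illustration, not taken from a real Bach score:

```python
# A hypothetical ordered sequence of notes from one composition.
notes = ["G", "A", "B", "C", "B", "A", "G", "A", "B"]

# Each adjacent pair becomes one training example: the network
# sees notes[i] as input and learns to predict notes[i + 1].
pairs = [(notes[i], notes[i + 1]) for i in range(len(notes) - 1)]

print(pairs[:3])  # the first three training examples
```

A real training pipeline would encode each note as a vector and batch these pairs, but the sliding one-step window is the core idea.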
To produce new music, simply give the network a starting note and let the RNN predict the next one. Feed that note back into the RNN, and you'll get the third note out. Do this over and over and soon you'll have an entire composition. (There's a little more to it, but you get the idea.) To learn more about RNNs, read Andrej Karpathy's article "The Unreasonable Effectiveness of Recurrent Neural Networks."
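The generate-and-feed-back loop can be sketched with a first-order Markov model standing in for the trained RNN. This is a deliberate simplification: the model below only remembers the single previous note, whereas a real RNN carries a hidden state summarizing the whole sequence so far. The loop itself, though, is the same: predict a note, then feed that prediction back in as the next input.

```python
import random
from collections import defaultdict

# An invented training sequence (not real Bach).
notes = ["G", "A", "B", "C", "B", "A", "G", "A", "B", "C"]

# "Training": record which notes follow which.
transitions = defaultdict(list)
for current, nxt in zip(notes, notes[1:]):
    transitions[current].append(nxt)

# Generation: start from a seed note and repeatedly feed the
# model's own prediction back in as its next input.
random.seed(0)
melody = ["G"]
for _ in range(7):
    melody.append(random.choice(transitions[melody[-1]]))

print(melody)
```

Every note in the generated melody is a plausible successor of the one before it, which is all this toy model knows; an RNN trained on real scores captures much longer-range structure.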
Style Transfer
Style transfer starts out with a pretrained DNN that already understands how to detect shapes and objects in images. As it happens, getting your hands on such a network isn't hard. You can use VGG16 (from Oxford's Visual Geometry Group, one of the top performers in the 2014 ImageNet competition) or Google's Inception network. The next part is a little tricky, but here goes.
We combine two images: a piece of art, like a Van Gogh painting (image A), and our photo (image B). Our goal is to capture the "style" of image A while retaining the content of image B. There's a lot of math that I won't go into here, but suffice it to say, it's very possible to do this. To learn more about this technique, watch Siraj Raval's nine-minute video "How to Generate Art – Intro to Deep Learning #8".
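At its core, the math combines two loss terms: a content loss comparing the generated image's features to the photo's, and a style loss comparing Gram matrices (channel-to-channel correlations), which is what captures "style" independent of where things sit in the image. Here's a toy sketch of those two terms. The tiny 2x4 "feature maps" are made up for illustration; in practice they come from layers of a pretrained network like VGG16, and the generated image is updated by gradient descent on the total loss:

```python
def gram(features):
    # Gram matrix: entry (i, j) is the dot product of
    # feature channel i with feature channel j.
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

def content_loss(gen, content):
    # Squared distance between generated and photo features.
    return sum((g - c) ** 2
               for gen_row, con_row in zip(gen, content)
               for g, c in zip(gen_row, con_row))

def style_loss(gen, style):
    # Compare channel correlations (Gram matrices), not raw
    # feature values -- this is what encodes "style".
    return sum((g - s) ** 2
               for g_row, s_row in zip(gram(gen), gram(style))
               for g, s in zip(g_row, s_row))

style_feats   = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]  # "Van Gogh" (image A)
content_feats = [[0.5, 0.5, 0.5, 0.5], [0.5, 0.5, 0.5, 0.5]]  # our photo (image B)
generated     = [[0.6, 0.4, 0.6, 0.4], [0.4, 0.6, 0.4, 0.6]]  # candidate output

alpha, beta = 1.0, 0.5  # weights trading off content vs. style
total = alpha * content_loss(generated, content_feats) \
        + beta * style_loss(generated, style_feats)
print(round(total, 4))
```

Lowering `beta` keeps the output closer to your photo; raising it pushes the output toward the painting's texture and color statistics.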