📍 Protein Folding with AlphaFold – On November 30th, results from the biennial protein-structure prediction challenge, the Critical Assessment of Structure Prediction (CASP), were announced. DeepMind won with a model called AlphaFold 2, achieving a dramatic leap forward in one of biology's grand challenges.
In the CASP competition, prediction accuracy is measured with the Global Distance Test (GDT), a score ranging from 0 to 100. AlphaFold 2 achieved a median score of 92.4 across all targets; a score of around 90 is considered comparable to results from experimental methods. At present, determining a protein structure experimentally is a very time-consuming process. AlphaFold's predictions could replace months of experiments with only a few hours of computing time, which would drastically accelerate applications such as drug development. – Read more on nature.com and deepmind.com.
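As a rough illustration of the metric: the GDT_TS variant is commonly computed as the average fraction of C-alpha atoms in the predicted structure that lie within 1, 2, 4 and 8 Å of their experimental positions after superposition. The sketch below is illustrative only and assumes the per-residue distances are already available; the official CASP scoring additionally optimizes over superpositions.

```python
# A hedged sketch of the GDT_TS score (illustrative, not the official CASP code):
# the average fraction of C-alpha atoms within 1, 2, 4 and 8 Angstrom of the
# experimental structure. Assumes per-residue distances after superposition
# are already given.
import numpy as np

def gdt_ts(ca_distances_angstrom):
    """Return the GDT_TS score (0-100) for per-residue C-alpha distances."""
    d = np.asarray(ca_distances_angstrom, dtype=float)
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    return 100.0 * float(np.mean([(d <= c).mean() for c in cutoffs]))

# Hypothetical 5-residue prediction: the fractions within the cutoffs are
# 2/5, 3/5, 4/5 and 5/5, giving a GDT_TS of 70.
print(gdt_ts([0.5, 1.5, 0.8, 3.0, 6.0]))  # 70.0
```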
📍 Protein Folding Explained – DeepMind produced a 2-minute introduction to protein folding, which you can watch on youtube.com.
💡 1. Use Cases – A South Korean TV station introduced an AI-powered news anchor. The virtual TV host "Ai Kim" was trained on 10 hours of video footage of her real-life counterpart and can now read the news indistinguishably from the original (joins.com).
🤯 2. Mind Blowing – Researchers from the University of Washington and Google introduce a method that turns casually captured selfie photos and videos into photorealistic 3D models of the subject. Watch Deformable Neural Radiance Fields (D-NeRF) in action here (youtube.com).
💡 3. Use Cases – During the pandemic, disinfecting workplaces and public spaces has become more important than ever. Autonomous cleaning robots are now able to operate in increasingly complex office spaces without human supervision (theverge.com).
📖 4. Papers – The paper "An Image is Worth 16x16 Words" brings the Transformer architecture to computer vision: images are split into fixed-size patches that are fed to a standard Transformer like word tokens. Pre-trained on large datasets, the resulting Vision Transformer matches or outperforms state-of-the-art CNN architectures on image classification tasks. Yannic Kilcher goes through the paper here (youtube.com); a minimal patch-embedding sketch follows after this list.
💭 5. Articles – State-of-the-art AI research requires an ever increasing amount of computing resources, which creates entry barriers for students and researchers and carries a high carbon footprint. Roy Schwartz labels this trend Red AI and calls on the research community to focus more on efficiency rather than accuracy alone (acm.org).
📖 6. Papers – A collection of the 10 most important NLP papers of 2020, including our favorites, T5 and the Reformer paper (topbots.com).
💭 7. Articles – Previously, we shared our general AI Expert Roadmaps. One reader added to that and recommended this NLP learning path for those focusing on language processing (analyticsvidhya.com).
💡 8. Use Cases – Using nearly a year's worth of data from an unprecedented 11,160 sensors along California's extensive highway system, the US Argonne National Laboratory dramatically improved its traffic forecasts (techxplore.com).
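Regarding item 4: below is a minimal, hedged sketch of the core idea in "An Image is Worth 16x16 Words", splitting an image into 16x16 patches and projecting each flattened patch to a token embedding. The function name, shapes and the random projection are illustrative assumptions, not the paper's exact configuration (the real model uses a learned projection plus position embeddings and a class token).

```python
# Illustrative sketch (assumed names/shapes): turn an (H, W, C) image into a
# sequence of patch tokens, as the Vision Transformer does before its encoder.
import numpy as np

def image_to_patch_tokens(image, patch_size=16, embed_dim=768, seed=0):
    """Split an image into patch_size x patch_size patches and project each
    flattened patch to an embed_dim-dimensional token (random weights stand
    in for the learned linear projection)."""
    H, W, C = image.shape
    assert H % patch_size == 0 and W % patch_size == 0, "image must tile into patches"

    # (H, W, C) -> (num_patches, patch_size * patch_size * C)
    patches = (
        image.reshape(H // patch_size, patch_size, W // patch_size, patch_size, C)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, patch_size * patch_size * C)
    )

    rng = np.random.default_rng(seed)
    projection = rng.standard_normal((patches.shape[1], embed_dim)) * 0.02
    return patches @ projection  # token sequence fed to a standard Transformer

tokens = image_to_patch_tokens(np.random.rand(224, 224, 3))
print(tokens.shape)  # (196, 768): 14 x 14 patches, each a 768-dimensional token
```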
📅 December 3 (online) – AIxIA: AI Conference 2020 – The German-French AI conference previously hosted in Karlsruhe has gone virtual this year. Register at aixia.eu.
📅 December 6–12 (online) – NeurIPS 2020 – The thirty-fourth Conference on Neural Information Processing Systems. The virtual schedule is available at nips.cc.