New Year, New Look – The AI Consulting team from AMAI launched a new website. Alongside the fresh look, you can find live in-browser demos of text anonymization and visual segmentation. An updated careers page is accompanied by a newly started AI Expert developer blog.
Leaking Language Models – A collaboration between academic institutions and corporations highlights privacy concerns about large language models. In the "Extracting Training Data from Large Language Models" paper (arXiv), researchers from Google, OpenAI, Apple, Harvard, Stanford and others demonstrate a training data extraction attack. Although they only have black-box access to the model (inputs and outputs, but no intermediate layers), they are able to retrieve potentially sensitive information. One example is laid out below.
Read more about this on the Google AI Blog or more extensively in this video by Yannic Kilcher.
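To make the attack more concrete, here is a minimal, hypothetical sketch of the ranking idea the authors describe: freely sample continuations from a public model (GPT-2 via the Hugging Face transformers library, used here purely for illustration) and surface candidates to which the model assigns unusually low perplexity relative to their zlib-compressed size, one of the membership signals used in the paper. This is not the authors' code, just a simplified rendering of the idea.

```python
# Hypothetical illustration of the extraction idea (not the authors' code):
# sample from GPT-2 and rank candidates by perplexity vs. zlib entropy.
import zlib

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Model perplexity of `text`, computed purely from inputs and outputs."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# 1. Sample unconditioned continuations from the model.
start = torch.tensor([[tokenizer.bos_token_id]])
samples = model.generate(start, do_sample=True, max_length=64, top_k=40,
                         num_return_sequences=5,
                         pad_token_id=tokenizer.eos_token_id)
candidates = [tokenizer.decode(s, skip_special_tokens=True) for s in samples]

# 2. Rank: text the model finds "easy" (low perplexity) even though it does
#    not compress well is a candidate for memorized training data.
def score(text: str) -> float:
    return perplexity(text) / len(zlib.compress(text.encode("utf-8")))

for text in sorted(candidates, key=score)[:3]:
    print(round(score(text), 2), repr(text[:80]))
```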
1. Miscellaneous – On a rather humorous note, this comic was drawn by XKCD two years ago. Given the paper above, it fits particularly well with the security concerns regarding large language models.
2. Use Cases – In 2019 DeepMind introduced MuZero, which can achieve superhuman performance in tasks such as chess, Go, and Atari games without knowing any underlying rules beforehand. After the recent publication in Nature, DeepMind's principal research scientist David Silver sat down with the BBC to discuss future applications. Now they have tasked MuZero with video compression, aiming to invent new ways to reduce the data footprint of video, which makes up the majority of internet traffic.
3. Opinion – Yann LeCun, Chief AI Scientist at Facebook, reflects on the unrealistic expectations some people hold about large-scale language models such as GPT-3. In his Facebook post he writes:
"[Trying] to build intelligent machines by scaling up language models is like building a high-altitude airplanes to go to the moon. You might beat altitude records, but going to the moon will require a completely different approach."
4. Education – The interactive cheatsheet from Stanford's CS229 Machine Learning course is great for learners and those trying to understand ML terminology.
5. Papers – Data-efficient image Transformers (DeiT) is a new method for training computer vision models that leverage Transformers. The method requires less data and far fewer computing resources to produce state-of-the-art image classification models. Read more in the Facebook AI Blog and the paper on arXiv.
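For readers who want to try the released models, here is a quick, hedged sketch of loading a pretrained DeiT classifier through torch.hub. The hub entry point follows the facebookresearch/deit repository (which also requires the timm package to be installed); the preprocessing below uses standard ImageNet statistics and a placeholder image path, both assumptions rather than details from the blog post.

```python
# Sketch: classify a local image with a pretrained DeiT model via torch.hub.
import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load("facebookresearch/deit:main",
                       "deit_base_patch16_224", pretrained=True)
model.eval()

# Standard ImageNet preprocessing (assumed; adjust to the model card if needed).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("cat.jpg").convert("RGB")   # any local test image
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
print("Predicted ImageNet class index:", int(logits.argmax(dim=-1)))
```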
6. Code – Animate the faces of other humans, Muppets or Nefertiti with your webcam in real time. Using the First Order Motion Model, images of others can be matched to your movements. Try it right in your browser with this Colab notebook by Eyal Gruss. Read more about the First Order Motion Model in this Towards Data Science article.
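Outside the browser, the model can also be run offline on a recorded driving video. The sketch below follows the usage pattern shown in the first-order-model repository's demo code; the config and checkpoint file names and the exact function signatures are assumptions taken from that repo and may differ from its current state. It is meant to be run from a clone of the repository with a downloaded pretrained checkpoint.

```python
# Rough sketch: transfer motion from driving.mp4 onto source.png (face model).
import imageio
from skimage import img_as_ubyte
from skimage.transform import resize

from demo import load_checkpoints, make_animation  # provided by the repo

# Source face to animate and a driving video of your own movements.
source_image = resize(imageio.imread("source.png"), (256, 256))[..., :3]
reader = imageio.get_reader("driving.mp4")
driving_video = [resize(frame, (256, 256))[..., :3] for frame in reader]

generator, kp_detector = load_checkpoints(
    config_path="config/vox-256.yaml",   # face model config (assumed name)
    checkpoint_path="vox-cpk.pth.tar",   # pretrained checkpoint (assumed name)
)

# Transfer the driving motion onto the source image, frame by frame.
predictions = make_animation(source_image, driving_video,
                             generator, kp_detector, relative=True)
imageio.mimsave("result.mp4", [img_as_ubyte(f) for f in predictions], fps=30)
```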
January 5, 17:00 CET (online) – This afternoon Louis Dorard gives a live demonstration of his Machine Learning Canvas, a framework used at AWS, BlaBlaCar and UCL. – Register on crowdcast.io.
January 15-16 – Global AI Bootcamp is a free one-day event organized across the world by local communities. As of now, 119 locations worldwide are organizing the Microsoft-sponsored event (most of them online). – Find your local community at globalai.community.
January 21, 17:00 CET (online, in German) – KI im Unternehmen: Wie und wo fange ich an? (AI in the company: how and where do I start?) Our colleague Woldemar Metzler sheds light on the particular challenges German "Mittelständler" (mid-sized) companies face when approaching AI. He shows first steps SMEs can take to increase their future competitiveness with Artificial Intelligence applications. – Free admission here on digitalhub-nordschwarzwald.de.