In June of 2019, we released the beta version of Runway. Propelled by an avalanche of open research in computer vision and generative networks, we launched with one goal in mind: build a platform that helps creatives use and make sense of machine learning.
Since then, Runway has been used by a wide range of creatives: filmmakers, designers, VFX and CGI professionals, artists, coders, musicians, students, and educators. We’ve helped them understand machine learning models and incorporate them into their workflows. Runway has also catalyzed discussions about the role of generative modeling in architecture, AI-driven techniques in industrial design, the role of synthetic media in content creation, and the automation of filmmaking, among other topics.
Today, Runway is so much more than a general-purpose platform for AI. Creatives come to Runway to accomplish things that are impossible to achieve anywhere else. From video-driven high-resolution animations to natural text generation, artificial intelligence brings automation at every scale, introducing dramatic changes in how we create. To fully support this new creative paradigm, we need an entirely new creative suite, one that’s fundamentally different from the outdated creative tools we use today, which are based on old media paradigms and ancient distribution channels. By bringing AI closer to creators, we’ve laid the foundation for a meaningful and radical change in how we create. Now, our goal is to build the next generation of creative tools, allowing more creatives to do impossible things.
The new creative standard.
There are three forces driving our mission to bring AI into everyday creative processes:
- Generative Machine Learning: Deep neural networks for image and video synthesis are becoming increasingly precise and realistic. In just a couple of years, we’ve gone from blurry 128x128 black-and-white images to high-resolution photorealism, enabling the rise of synthetic media. At the same time, neural rendering techniques are bringing computer graphics up to date with the latest developments in deep learning. This means faster and more precise rendering techniques, more accessible graphics pipelines, and automated tools that are easier to use, such as our new Green Screen tool. These advancements make one thing increasingly clear: the gap between content created by Hollywood and by a TikToker is rapidly shrinking.
- New Distribution Channels: We not only consume content faster than before, we consume it differently. The rise of platforms like YouTube, Twitch, TikTok, Patreon, and Substack has given birth to a new generation of creators, necessitating a new generation of creative tools. AI-powered tools will dramatically reduce the cost of creating content for independent creators, studios, and brands looking to monetize their distribution channels.
- The Web: The future of media creation should be collaborative and accessible; for that reason, it needs to happen on the web and be built on open web technologies. Web and cloud-native creative applications are scarce but growing fast. The rise of accelerated graphics and machine learning frameworks on the web, with WebGL, WebAssembly, and soon WebGPU, will catalyze a revolution in real-time graphics and machine learning in the browser. Furthermore, cloud processing significantly reduces the need for sophisticated hardware to run large deep learning models, broadening the accessibility of machine learning.
We are witnessing a significant transformation in the way we produce content. Fueled by generative machine learning, substantial changes in distribution, and the accessibility of the web, the future of creative tools looks very different from the current ecosystem. We’re proud to be building a platform that can lead these transformations across all creative industries.
Creative Expression
We love finding ways to express ourselves. We love creating things. Yet our tools sometimes get in the way of our creativity: they are slow, complex, and expensive. In this sense, every tool introduces a worldview. In Runway’s worldview, the path from idea to execution should feel instantaneous and magical, in the spirit of Clarke’s third law. A tool should be simple, set your brain free, and let you concentrate on your creative vision. Let’s never forget that computers are like a bicycle for the mind.
Making intuitive interfaces that feel magical is only useful if they translate into value for creators. So far, Runway users have trained more than 50,000 AI models and uploaded around 24 million assets to the platform. Our active and growing community includes designers and creative teams at IBM, Google, VMLY&R, R/GA, and New Balance, and independent musicians such as YACHT, who used Runway on their Grammy-nominated album Chain Tripping. Among educators, Runway is being incorporated into the design curriculum at RISD, UCLA, NYU, and MIT, among other institutions. Community workshops using the platform have popped up across the US, New Zealand, Australia, the Netherlands, Germany, Spain, Denmark, England, Peru, Colombia, Chile, and Mexico. Inventing the future of creative tools requires recognizing machine intelligence’s potential as a tool for human expression through media creation. To do that, we need to work closely with the creative communities we serve.
Established creative software for image and video, including After Effects, Photoshop, Final Cut Pro, and Premiere, is rooted in technical and historical constraints that cannot sustain the new media syntaxes artificial intelligence brings to content creation. At Runway, we are pioneering image and video creation techniques and inventing interfaces for powerful synthetic manipulation and editing: using text to edit video, using video to create video, applying technology that didn’t exist until very recently. We are working to predict the future by inventing it. Runway is reimagining how we create, so we can create impossible things.