With Remotion v4.0.130, Remotion Lambda renders now complete significantly faster!
The longer the video, the higher the speedup.
Benchmark: rendering a 1920x1080px video with an <OffthreadVideo> tag loading a 15MB Full HD video and looping it. The Lambda function was given 3000MB of memory.
Thanks to an innovative audio concatenation strategy that we implemented with help from Max Schnur from Wistia, we can skip an audio re-encoding step at the end.
Remotion Lambda renders portions of a video in parallel and concatenates them at the end.
The slow part is actually not the video, but the audio rendering!
While other codecs are possible, a .mp4 file usually contains AAC audio.
Concatenating the AAC chunks is usually not possible without creating some artifacts.
Online resources such as Stack Overflow were quickly exhausted.
I went to the RTC.on multimedia conference and talked about this problem in my talk.
A few listeners came up to me afterwards with ideas, and I did a session with Michał Śledź from Software Mansion - all of which helped me even understand the problem we were facing in the first place.
No immediate solution was found, but the problem was set aside when we realized that the libfdk-aac encoder is twice as fast as FFmpeg's native encoder, which softened the problem for the moment.
Each audio segment needs to have a sample count divisible by 1024.
Each segment should have extra packets at the beginning and the end to not lose the keyframes.
The extra padding will be removed when concatenating them together.
There are many tricky aspects to implementing this correctly:
Each audio segment needs to be a bit longer than previously and they need to be slightly overlapping. Extra frames need to be evaluated, but they don't need to be screenshotted.
Video chunks should not contain the padding, hence audio and video need to be separated.
Depending on the rounding and the position in the audio, between 1 and 3 extra packets are required.
The FFmpeg inpoint and outpoint options need to be nanosecond-precise for correct trimming.
All audio layers should be resampled to the same sample rate (we settled on 48000 Hz).
Our timeline positioning, volume curves, pitch correction, playback rate capabilities need to continue working.
Rendering only a portion of the timeline will shift the start timestamp, which changes the math for each chunk.
The video frame rate (commonly 30fps) and the audio sample rate (commonly 48000Hz) seldom align. By capturing extra frames, we get too much padding, which needs to be trimmed off again for each chunk.
The FFmpeg atempo filter is imprecise: for example, speeding up 80,000 audio samples by 2x leads to 40,014 audio samples instead of the expected 40,000. Tiny imperfections like this break seamless concatenation entirely. To fix this, we had to flip the order of trimming and speeding up audio.
Not pictured above: each AAC file has 512 samples of silence at the beginning. This delays all audio slightly, but adding a negative offset to the MP4 container usually balances it out.
If all of these factors are accounted for, concatenating AAC chunks will be completely seamless!
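To make the packet math above concrete, here is an illustrative model (not Remotion's actual implementation; the helper names are made up). It shows why 30fps frames and 48000 Hz audio do not line up with AAC packet boundaries, and how a chunk's sample count gets padded to one:

```typescript
// Illustrative model of the chunk math described above (not Remotion's
// actual implementation). An AAC packet holds 1024 samples, so every
// audio chunk must be padded out to a packet boundary.

const SAMPLES_PER_AAC_PACKET = 1024;

// Audio samples covered by one video frame. At 30 fps and 48000 Hz this
// is 1600 samples - not a multiple of 1024, which is why chunk boundaries
// rarely align with packet boundaries.
export const samplesPerFrame = (sampleRate: number, fps: number): number =>
  sampleRate / fps;

// Pad a chunk's exact sample count up to the next packet boundary and
// report whether a whole extra packet of padding was needed. (In the real
// pipeline, overlap at both ends means 1 to 3 extra packets per chunk.)
export const padToPacketBoundary = (
  exactSamples: number
): {paddedSamples: number; extraPackets: number} => {
  const packets = Math.ceil(exactSamples / SAMPLES_PER_AAC_PACKET);
  return {
    paddedSamples: packets * SAMPLES_PER_AAC_PACKET,
    extraPackets: packets - Math.floor(exactSamples / SAMPLES_PER_AAC_PACKET),
  };
};
```

A single video frame at these rates covers 1600 samples, so it needs 2048 padded samples - one extra packet that must later be trimmed away during concatenation.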
Upgrade to Remotion 4.0.130 or later to benefit from the faster rendering on Lambda.
We look forward to engineering even more performance improvements in the future, for lower costs and a better user experience!
Use npx remotion bundle to export the Remotion Studio as a static website. Enter this build command on Vercel, Netlify or another provider to continuously deploy the Studio.
You may also deploy the Remotion Studio as a long-running server, for example on Fly.io or Render.com.
The advantage is that the Render Button stays active, meaning you can render and download videos!
We added handy features to the Remotion Preview - as a result, it's more than just a preview! Therefore, we renamed it: Say hello to the Remotion Studio.
The props of a composition can now be defined as a Zod schema.
Doing this will not only make your defaultProps typesafe, but also allow you to edit them in the Remotion Studio.
Edit numbers, strings, arrays, objects, enums and dates directly using a graphical interface. Even nested data structures can be visualized and edited.
Once you are happy with the changes you've made, you can even save the props back to your project. This works with arbitrary JSON data.
Instead of typing in a CLI command, you can now simply press a button to render an asset.
A graphical interface allows you to discover and tweak all options of a render. You can follow the progress of a render in the Remotion Studio, queue multiple renders, and reveal the output in the file explorer.
Every render triggered through the UI is also trackable in the CLI as usual and synchronizes to other instances of the Remotion Studio.
Failed renders show the stack trace and allow for retries with the same configuration.
Edited props in the Remotion Studio can be used to render a video using the Render Button as well - which means you can now render a parameterized video by filling out a form and not having to touch any code.
Installing FFmpeg is now superfluous, as each Remotion project comes with a tiny version of FFmpeg baked into it.
This eliminates our burden of supporting multiple versions of FFmpeg, and you don't have to worry about installing it anymore.
We ship a custom build of FFmpeg 6.0, which is much smaller than a version that you would download. On Lambda, it decreases the cold start time of your functions.
We also get access to the low-level C API that allows us to do things that were not possible before.
The <OffthreadVideo> component is the preferred way to embed an existing video into a Remotion project.
Previously, frames were extracted using the FFmpeg CLI; we now use the FFmpeg C API to extract frames. Because we can keep the video open between extractions, this is much faster than before.
Unnecessary redundant decoding work can now be skipped, which makes the component up to twice as fast during rendering!
We are introducing a new calculateMetadata() API for the <Composition> component. It helps if you want to:
1. Adjust the duration or resolution based on the props of the React component
2. Perform data fetching before the video renders
3. Precalculate props before mounting the React component
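A minimal sketch of such a callback follows. The callback shape follows the documented API, while the props type and fps value are example assumptions; the real API also accepts an async function, which enables the data-fetching use case:

```typescript
// Sketch of a calculateMetadata() callback. MyProps and fps are example
// assumptions; the real API also accepts an async function.

type MyProps = {durationInSeconds: number};

export const calculateMetadata = ({props}: {props: MyProps}) => {
  const fps = 30;
  return {
    // Derive the composition length from the props instead of hardcoding it
    durationInFrames: Math.round(props.durationInSeconds * fps),
    props,
  };
};
```

Passing this function to the `calculateMetadata` prop of a `<Composition>` makes the duration follow the data instead of a hardcoded value.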
To demonstrate the possibilities of the new API, we made a new section in the docs entirely dedicated to data-driven videos. See: Parameterized rendering.
All of our templates have been upgraded to use Remotion 4.0. Many of them make use of the new features, for example the popular Text-to-speech template allows you to customize the text and voice, and the template will automatically adjust the duration of the video to match.
We also introduce two new templates: Text-to-speech (Google), which is an alternative to the Azure TTS template, as well as Stargazer (https://www.remotion.dev/templates/stargazer), a popular template for celebrating GitHub star milestones that can now be initialized using npm init video.
Now, when you are rendering a video and don't have FFmpeg installed, Remotion will download a copy for you.
Previously, installing FFmpeg required 7 steps on Windows and took several minutes when using Homebrew on macOS.
This package offers utilities for animating and manipulating SVG paths! With 9 pure, type-safe functions, we cover many common needs while working with SVG paths:
If your device supports multitouch, you can now pinch to zoom the composition. Alternatively, you can hold Ctrl/Cmd and use your scrollwheel to zoom.
Using two fingers, you can move the canvas around and pressing 0 will reset the canvas. For the latter, there is also a button in the top-right corner that you can click.
When no audio is detected in your video, the audio track is now dropped (except on Lambda). With this new flag, you can enforce that a silent audio track is added.
Using these flags, you can override the width and height you have defined for your output. The difference from --scale is that the viewport, and therefore the layout, may actually change.
If you add --log=verbose, the slowest frames are shown in order, so you can optimize them. Slowest frames are also available for renderMedia() using the onSlowestFrames callback.
When rendering a still, you may now pass a negative frame number to refer to frames from the back of the video. -1 is the last frame of a video, -2 the second last, and so on.
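The resolution rule is easy to model (illustrative helper, not part of the Remotion API):

```typescript
// Illustrative helper (not part of the Remotion API) showing how a
// negative frame index resolves: -1 is the last frame, -2 the second
// last, and so on.
export const resolveFrame = (
  frame: number,
  durationInFrames: number
): number => (frame < 0 ? durationInFrames + frame : frame);
```

For a 100-frame video, frame -1 resolves to frame 99 and -2 to frame 98.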
If a render crashes due to being resource-intensive (see: Target closed), Remotion will now retry each failed frame once, to prevent long renders from failing on low-resource machines.
Previously, the progress for rendering and encoding was reported individually. There is a new field, simply named progress, in the onProgress callback that you can use to display progress without calculating it yourself.
Since getting the progress was less important than some of the options, bundle() now accepts an object with options, progress callback and entryPoint altogether:
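The new object-style call shape can be sketched as follows. The stub below stands in for the real @remotion/bundler implementation and only illustrates the signature; the option names mirror the ones described above:

```typescript
// Shape of the new object-style bundle() signature. bundleStub is a
// stand-in for the real bundler, used only to illustrate the call shape.
type BundleOptions = {
  entryPoint: string;
  onProgress?: (progressPercent: number) => void;
  // ...further bundler options live alongside these
};

export const bundleStub = ({entryPoint, onProgress}: BundleOptions): string => {
  // The real bundler reports progress from 0 to 100 while bundling
  onProgress?.(100);
  return `/tmp/bundle-for-${entryPoint}`;
};
```

Callers now pass the entry point, the progress callback and all other options in one object, instead of positional arguments.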
The new <Thumbnail> component is like the <Player>, but for rendering a preview of a still. You can use it to display a specific frame of a video without having to render it.
In addition to timeupdate, you can subscribe to frameupdate, which fires whenever the current frame changes. You can use it for example to render a custom frame-accurate time display.
On YouTube, the video always starts with controls shown and then they fade out after a few seconds. We have made this the default behavior in Remotion as well, because users would often not realize that the Player is interactive otherwise. You can control the behavior using initiallyShowControls.
Using the inFrame and outFrame props, you can force the Remotion Player to only play a certain section of a video. The rest of the seek bar will be greyed out.
You can define the initialFrame on which your component gets mounted. This will be the default position of the video; however, it will not clamp the playback range like the inFrame prop does.
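The clamping behavior of inFrame and outFrame can be modeled in one line (illustrative helper, not the Player's internal code):

```typescript
// Illustrative helper (not the Player's internal code): the playhead is
// clamped into [inFrame, outFrame]; null means the bound is not set.
export const clampToRange = (
  frame: number,
  inFrame: number | null,
  outFrame: number | null
): number => Math.min(Math.max(frame, inFrame ?? 0), outFrame ?? Infinity);
```

With inFrame 10 and outFrame 20, seeking to frame 5 lands on 10, and seeking to 25 lands on 20.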
In addition to the Preload APIs, prefetch() presents another way of preloading an asset so it is ready to display when it is supposed to appear in the Remotion Player.
The Remix template is our first SaaS template! It includes the Remotion Preview, the Player and Remotion Lambda out of the box to jumpstart you with everything you need to create your app that offers customized video generation.
You can now send and receive a webhook when a Lambda render is done or has failed. Examples for Next.js and Express apps have been added and our documentation page features a way to send a test webhook.
Previously, the input props passed to a Lambda render could not be bigger than 256KB when serialized. Now, this limit is lifted: if the payload is big, it will be stored to S3 instead of being passed directly to the Lambda function.
The new npx remotion benchmark helps you compare different render configurations and find out which one is the fastest. Currently, you can compare different codecs, compositions and concurrency values. Each configuration is run multiple times in order to increase confidence in the results.
We try to avoid jargon, but we have also created a Remotion Terminology page to define some commonly used terms. When using these terms, we will from now on link to the terminology page so you can read about them.
The file that was previously called src/Video.tsx in templates is now called src/Root.tsx, because it did not contain a video, but a list of compositions. That component was also renamed from RemotionVideo to RemotionRoot. The new naming makes more sense, because that component is passed into registerRoot().
Thank you to these contributors that implemented these awesome features:
ayatko for implementing the @remotion/google-fonts package
Antoine Caron for implementing the <Thumbnail> component, for reloading the page when the environment variables change and implementing negative frame indices
Apoorv Kansal for implementing the documentation search in the Quick Switcher, the benchmark command and the option to customize Play button and fullscreen button in the Player
Akshit Tyagi for implementing the --height and --width CLI flags
Arthur Denner for implementing the direction property for the Lottie component
Many of these contributions came during Hacktoberfest where we put bounties on GitHub issues. We also want to thank CodeChem for sponsoring a part of those bounties!
We are delighted to announce that we have raised 180'000 Swiss Francs from Remotion users and customers!
With our first funding, we will make it easier for you to programmatically create videos and video apps. We'll introduce new components, templates and tools to help you build more with less code.
The number one feedback that we have heard is that being able to write videos in React is powerful, but simple things can be hard. Fortunately, almost any complexity in React can be abstracted, packaged up, released to NPM and shared with others.
While our low-level primitives will always be here, we will also develop higher-level components solving common needs that people face. This will allow more developers, not just React experts, to use Remotion.
We also encourage our community to create building blocks for Remotion and will sponsor developers as well as help them monetize their work.
With the Remotion Player and Remotion Lambda, we provide APIs that allow you to build apps that produce videos for end users.
We have tons of opportunities to make it easier to build an app with Remotion. We are going to release UI elements, SaaS templates and even best practices for payment integration, so companies can realize Remotion solutions faster and with fewer resources.
We recognize that startups usually raise more money than we do, and at an earlier stage. At the same time, they take on a high risk of failure by running out of money.
With the amount we have raised, we are able not only to continue but to accelerate our operation and grow our company license revenue, so we can confidently stay here for a long time.
Remotion is a thriving community of business customers, creative coders, professional Remotion freelancers and indie hackers whose interest is our long-term success. Our aim is to grow in a healthy way together with our community!
To everybody who tried out Remotion, sent a pull request, tweeted about it or filed a bug: it is a huge thrill to see people believe in the ideas that we put out, and we are very privileged to be able to continue working on them.
Our timeline has some new features that make it behave more like traditional video editors. You can now zoom in and out of the timeline to better focus on a certain section of a video. When playing the video, the timeline moves along with the cursor. Scrubbing with the cursor or keyboard will also scroll the timeline so the cursor is always in the viewport.
The other new timeline feature is that there are now ticks every second, and when zoomed in, smaller ticks that denote the positions of individual frames. This should help you orient yourself when you are wondering at which point of the video you are.
Improvements to audio-only and video-only rendering
You can now explicitly drop the audio of a video by passing --muted in the render. Videos that include no audio are now faster because we don't include a silent audio track anymore (use --enforce-audio-track to get the old behavior).
Renders that are audio only are now faster because Remotion will not wait for the video tags to seek.
Renders that are only video are now faster because no assets need to be downloaded to be included in the audio track.
Remotion Lambda now has a privacy: "no-acl" option if you are rendering into a bucket that has the ACL feature disabled.
Remotion Lambda now supports a downloadBehavior prop: when an output file link is clicked in the browser, the file downloads instead of playing.
Adding an output filename to the npx remotion render command is not necessary anymore; it now defaults to out/{composition-id}.{extension}.
The <Player> has a new moveToBeginningWhenEnded prop that determines if the player moves back to the beginning when the video has reached the end and is not looping.
The <Player> has a new fullscreenchange event that allows you to react when the player enters or exits fullscreen.
New ESLint rule that warns you if you are passing a relative path or remote URL to staticFile: staticFile("../my-file.png") or staticFile("https://example.com")
Better error message on Remotion Lambda when the s3:ListBucket permission for the bucket you are rendering into is missing.
ESLint warning when passing a file ending in .gif to the <Img> component.
Better error message and help page when calling renderMediaOnLambda() inside another serverless function and AWS credentials are conflicting
Better error message and help page when rendering into a bucket that has ACL disabled but you are setting the privacy to public or private.
This release brings support for GIF as an output format, official support for Tailwind and makes springs and sequences easier! Plus we recap the best features from v3.0.1 until v3.0.31! 🎉
To render a GIF instead of a video, pass the --codec=gif flag during a render. We tweaked Remotion's rendering process to adapt to the peculiarities of GIFs:
Commonly, a GIF has a framerate in the range of 10-15 fps. So that you don't have to refactor your video, you can use the --every-nth-frame flag.
GIFs are loopable - using the --number-of-gif-loops flag, you have control over the GIF's looping behavior!
You can even render your GIF distributed across many small VMs using Remotion Lambda!
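The effect of --every-nth-frame can be modeled as simple frame selection (illustrative, not Remotion internals):

```typescript
// Illustrative model of --every-nth-frame (not Remotion internals): only
// every nth source frame is captured, lowering the effective frame rate
// of the GIF without changing your composition.
export const capturedFrames = (
  totalFrames: number,
  everyNthFrame: number
): number[] => {
  const frames: number[] = [];
  for (let frame = 0; frame < totalFrames; frame += everyNthFrame) {
    frames.push(frame);
  }
  return frames;
};
```

A 30fps composition rendered with --every-nth-frame=2 captures frames 0, 2, 4, ... and yields an effective 15fps GIF.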
The result will be a spring animation that will take 10 seconds!
Why is this such a game changer? Normally, a spring animation curve is not defined by timing, but by physical parameters. It complicates planning quite a bit, as the duration of a spring is not well-defined. Theoretically, a spring animation is never finished, it keeps on going forever (even though after some time the movement is barely noticeable).
We introduced measureSpring() a while ago which allows you to calculate the duration of a spring by allowing you to set a threshold.
But to change the duration of a spring, you had to change the physics parameters which then in turn change the animation curve!
Until now - if you pass a duration to a spring, we will detect the trajectory of the curve and stretch it so it fits your duration.
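The idea of stretching a spring's trajectory can be sketched with a stand-in curve: take the natural settling duration, then rescale time so the trajectory fits the requested duration. This is only a conceptual model, not Remotion's implementation:

```typescript
// Conceptual model of spring() with a duration (not Remotion's
// internals): rescale time so the natural trajectory fits the
// requested duration.

const naturalDuration = 30; // frames until the stand-in spring settles

// Stand-in for the natural (physics-driven) animation curve, 0 -> 1.
const naturalCurve = (frame: number): number =>
  1 - Math.exp(-5 * (frame / naturalDuration));

// Evaluate the curve as if it were stretched to `requestedDuration` frames.
export const stretchedSpring = (
  frame: number,
  requestedDuration: number
): number => naturalCurve((frame / requestedDuration) * naturalDuration);
```

The animation's shape stays the same; only its time axis is scaled, so a 30-frame settle can be made to fill 300 frames.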
This component is an alternative to the <Video> component that extracts frames using FFmpeg and renders them inside an <Img> tag.
We made the <OffthreadVideo> component in order to counteract problems with seeking and throttling that users were reporting with the <Video> tag. The new way improves reliability but has its tradeoffs - read <OffthreadVideo> vs <Video> or check out our visual explanation on Instagram!
In the preview, go to Tools -> Color picker to trigger an eye dropper that allows you to pick any color from the screen! Only Chrome has this feature enabled for now.
We welcome Patric as our intern! As you can see on our new team page, we are now a team of three and are preparing our first fundraising round.
Patric's first Remotion video!
Remotion won the "Most Exciting use of Technology Award" at React Summit - we owe it all to you!
Going forward, we want to make Remotion even easier to use through new tools, templates and tips!
And wouldn't it be nice if Remotion was faster? I'm exploring multiple options, from an alternative concurrency model to a C++-based rendering solution - stay tuned for what's about to come 🚀
After more than 10 months in development and 1400 commits, it feels so good to announce Remotion 3.0!
I am convinced that Remotion Lambda is the best piece of software that I have ever written. It is the final puzzle piece needed to complete our vision: A full stack for developing video apps! Enjoy the changelog, and if you haven't, check out the Remotion 3.0 Trailer.
Remotion Lambda is a distributed video renderer based on AWS Lambda. It is made for self-hosting, so you deploy it to your AWS account. Once your Lambda function is up, you can give it rendering tasks, which it will split up into many small units of work that get processed in parallel by spawning itself many times.
Lambda is the best of all worlds:
Fast: Lambda can render a video up to many times faster than the fastest consumer computers. The longer the video, the higher the speed gain. The Remotion Lambda trailer was rendered in 15 seconds instead of 60 seconds, and a 2 hour video was rendered in just 12 minutes[1].
Cheap: You only pay for when you are rendering. The Lambda functions use ARM architecture for best price-performance efficiency.
Scalable: You can render many videos at the same time. Lambda concurrency limits apply, but can be increased.
Easy: Chromium and FFmpeg are already pre-installed, and we have handled all the edge cases. You only need to code your video, follow the steps to deploy a function and invoke a render.
All functionality is available via CLI commands and Node.js functions. We've written 45 pages of documentation, released over 50 alpha versions to testers, and written many tests, from unit to end-to-end. Lambda is mature and used in production by companies like Combo and Jupitrr.
Previously, rendering frames and stitching them together into a video was a sequential process where one step could only start once the other had finished. In Remotion 3.0, stitching can start while rendering is still in progress! This results on average in a 10-15% speedup.
Additionally, downloading audio assets now happens earlier in the rendering pipeline and if you rely on remote audio, you should see a handsome speedup as well.
A new function has been added to @remotion/renderer, called renderMedia(). It combines the already existing functions renderFrames() and stitchFramesToVideo(), but takes advantage of the new parallel rendering pipeline. It can render videos as well as audio and requires fewer arguments - a win for both speed and ease of use!
We are taking an initiative to make errors easier to understand. While much of the error handling had been delegated to third-party libraries until now, we've inlined the logic, allowing us to streamline it. Minified errors are symbolicated, we've implemented a new error overlay, and timeout errors are more descriptive. Let us know what you think!
A minified error that happened inside a Chrome browser inside a remote Lambda function displays a proper stacktrace!
Our custom error overlay has the ability to open the troublesome file in your default editor, and look for similar GitHub issues.
In the remotion.config.ts file, you can now import other files. Under the hood, we use esbuild instead of TypeScript to read the file. This was a pain point before: Node.js APIs don't read from the config file and require you to specify the options explicitly. Configuration such as a Webpack config override could not be shared well between CLI and Node.js renders so far, which we address with this change.
Keeping our stack modern allows us to move faster and eliminate dependencies.
With Remotion 3.0, support for Node 12 is dropped, and we officially support Node 18.
Our ESLint config has been updated to take advantage of ESLint 8, which is now also officially supported.
Read the migration guide to update to Remotion 3.0. The most severe breaking changes revolve around server-side rendering in an attempt to make it faster and simpler. Other than SSR changes and the Node 14 requirement, nothing should break.
[1] Rendering the composition 2hrvideo in the example folder in the Remotion repository with --frames-per-lambda=1080, a Lambda function running on the arm64 architecture with 2048MB RAM, on warm Lambdas in the us-east-1 region.
The biggest announcement of this release is that the @remotion/player package is now generally available - but not just that, we have some other sweet new features too!
With the <Player/> component, you can embed a Remotion Video inside a React app without rendering the video. The API is modeled after the native HTML <video> tag that many developers are already familiar with.
The API allows you to use our predefined controls, or build your own. Familiar UI patterns like volume slider, fullscreen button, as well as gesture mechanics such as click to play/pause are supported.
You can dynamically update the props of the video at runtime, which creates an experience that stuns users: videos that react to their input!
On mobile, restrictive policies prevent auto-play of audio content. We help you architect your player so it can play audio in as many cases as possible while still respecting the browser policies. This includes the option to mount silent audio tags, activate them while the user interacts with the page and use them later to play audio.
Recently, we broke the error overlay that pops up when your code contains an error. This is now fixed and we went deeper than ever before!
The Fast Refresh and error overlay are now inlined in our codebase, allowing for customization that makes sense for Remotion. The overlay now matches the dark theme of the Remotion Preview and includes handy links such as opening a file in the editor, looking for the error in our GitHub issues and our Discord community.
You can now put static files inside a /public folder in your Remotion project and load them using the new staticFile() API.
If you include the new <Player /> component in a Create React App or Next.js project, the folder can be shared between Remotion and the framework you are using.
Data URLs are now valid sources for <Audio /> and <Video /> tags. This is useful for example for tones that are programmatically generated. To help with development of such projects, a new API was added to the @remotion/media-utils project: audioBufferToDataUrl(). See our festive Tone.js sample project for an example!
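To illustrate the idea of a programmatically generated tone as a data URL, here is a hand-rolled sketch that builds a minimal mono WAV file. In a real project you would use audioBufferToDataUrl() from @remotion/media-utils; this standalone version only shows the concept:

```typescript
// Hand-rolled sketch (not the @remotion/media-utils implementation):
// synthesize a sine tone as 16-bit mono PCM, wrap it in a minimal
// RIFF/WAVE header, and return it as a base64 data URL.
export const toneDataUrl = (
  frequency: number,
  seconds: number,
  sampleRate = 44100
): string => {
  const numSamples = Math.floor(seconds * sampleRate);
  const dataSize = numSamples * 2; // 16-bit mono
  const buffer = Buffer.alloc(44 + dataSize);
  // Minimal RIFF/WAVE header
  buffer.write('RIFF', 0);
  buffer.writeUInt32LE(36 + dataSize, 4);
  buffer.write('WAVE', 8);
  buffer.write('fmt ', 12);
  buffer.writeUInt32LE(16, 16); // fmt chunk size
  buffer.writeUInt16LE(1, 20); // PCM
  buffer.writeUInt16LE(1, 22); // mono
  buffer.writeUInt32LE(sampleRate, 24);
  buffer.writeUInt32LE(sampleRate * 2, 28); // byte rate
  buffer.writeUInt16LE(2, 32); // block align
  buffer.writeUInt16LE(16, 34); // bits per sample
  buffer.write('data', 36);
  buffer.writeUInt32LE(dataSize, 40);
  for (let i = 0; i < numSamples; i++) {
    const sample = Math.sin((2 * Math.PI * frequency * i) / sampleRate);
    buffer.writeInt16LE(Math.round(sample * 32767), 44 + i * 2);
  }
  return `data:audio/wav;base64,${buffer.toString('base64')}`;
};
```

The resulting string can be passed straight to the src of an <Audio /> tag.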
When running npm init video, there's a new template to choose from: "Audiogram"! This one allows you to convert podcast clips to clean visualizations that you can post on social media.
The next release is finally going to be our next major release, containing a refactor of our rendering pipeline and serverless rendering support. Look out as we release the missing puzzle piece in our vision of programmatic video!
You may know this feature from programs like After Effects and Davinci Resolve already. It is as simple as it is useful: You can set an “In” mark and an “Out” mark and the preview will only play whatever is in-between those timestamps. This makes it much easier to visually “debug” a section of the video without having to watch the whole thing.
Previously, in order to repeat content, you had to manually create a bunch of sequences and calculate the timestamps yourself. We added a helper called <Loop /> which will repeat its children either indefinitely or for a fixed number of times.
Another benefit is that we display the loop component cleanly in our timeline.
You can now change the playback rate in the editor and play a video in slow-motion, in fast-forward, and even in reverse! We support speeds between -4x and 4x. This makes debugging animations that don’t look clean much easier.
It also works in the <Player />! See the new playbackRate prop and we also added a ratechange event - just like the native HTML5 Video element.
These new shortcuts are super handy for navigating through a timeline. With the L key, you play the video as normal. Pressing the L key again will increase the speed to 2x, and pressing L three times in total will play the video in 4x.
The J key works the same, but plays the video backwards. Now you can reach any point in the video easily with just those two keys, even if the video is playing, without using the mouse.
Once you have reached the point where you want to pause the video and continue to code it, the K key will reset the playback rate to 1x and pause the video.
Once you learn how to navigate using JKL keys, you'll never use your mouse for scrubbing again!
If you wanted to delay an element but not cap its duration, you had to explicitly specify durationInFrames={Infinity}. Not anymore! This is now the default and may be omitted.
If you upgrade the @remotion/eslint-config package as well, we will even automatically remove the prop when you have autofix enabled!
Thanks to Khalid Ansari for implementing this feature!
In case you don’t know Fig, it is a free macOS application that provides autocomplete for the terminal. What sounds like a gimmick, actually works surprisingly well and I personally would miss it a lot if I didn’t have it!
The Remotion CLI that you can invoke using npx remotion now has full autocomplete support in Fig! You don't have to do anything except install Fig.
Node 17 came out recently and broke almost every Webpack project because legacy crypto functions were removed.
We added the necessary modifications to our default Webpack config, and even contributed a pull request to Webpack to fix the last remaining bug that would break Remotion with Node 17! If you are upgrading Node, definitely make sure to get this new version of Remotion.
Contributors to Remotion would previously often struggle to correctly set up our monorepo. It was hard to correctly link all the packages, and too easy to mess it up and run into error messages.
This is why we are happy to have migrated to pnpm, which gets rid of the linking problems and also speeds up installation significantly. In our CI systems, we saw build times go down by 40%, which allows us to iterate much faster.
Thanks to Sergio Moreno for implementing this migration!
A new template has been added to npm init video / yarn create video: The blank template.
This template contains only the bare minimum Remotion boilerplate and a completely empty canvas. It is especially useful for people already familiar with Remotion who would like to skip deleting the Hello World project every time.
Thanks to Aneesh Relan for creating this template!
Previously by default, a video would be rendered to out.mp4 in the root directory of your project. This also meant that in order to ignore it from Git, we had a complicated .gitignore by default that would ignore video files in the root but inverse-ignore other video files.
Time to simplify: From now on, we render a file into an out folder by default and simply ignore that folder.
Thanks to ahmadrosid for implementing this feature!
A few interesting updates for users of @remotion/three:
The Three Canvas is now wrapped in <Suspense>, and the render is delayed until the content has been loaded (unsuspended). This works better with the React Three.js ecosystem, and components such as drei's <Environment /> now work out of the box.
We now default to the ANGLE OpenGL renderer for Google Chrome, which we have found through empirical testing to have the best overall support for Three.js content across platforms.
We opted into participating in Hacktoberfest, and put $100 bounties on 11 issues as an extra incentive!
Every single one of those issues has been picked up and solved! Every contributor did a great job, many greatly surpassing our expectations!
Thank you everybody who participated and contributed to this release!