How can I make the most of my voiceover quota?

A guide on how to maximize your voiceover quota minutes.

Written by Mmekobong Silas
Updated over 4 months ago

We recently updated premium voiceover generation to improve quality and consistency by generating voiceover using the full context of a project's script. This produces vastly improved results, but it may require a change in your workflow to keep tabs on your voiceover quota.

What problems does this update solve?

This update eliminates inconsistencies in:

  • Pronunciation of certain words

  • Pitch and speed

  • Accent

  • Voice applied

  • Volume

  • Tone of narration

When you apply a Premium voice, you can now expect consistent narration without encountering the problems above.

Will the new update affect my workflow?

Yes. Although you can still apply your premium voiceovers as before, we recommend that you finalize your script before applying a premium voice to preserve your quota. If you make any changes to your script after applying a voice, the voiceover for the entire project will need to be regenerated, and the full duration of the narration will be deducted from your quota.

When you're ready, select Apply as before to generate the voiceover; once it is generated, it will be applied to your project.

If you do have to make changes after applying a voiceover, you can preserve your quota by making all of the changes before regenerating.
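To see why batching changes matters, here is a minimal sketch of the quota arithmetic described above. The function name and the example numbers are hypothetical, chosen only for illustration; the key point from the article is that every regeneration deducts the full narration duration.

```python
# Hypothetical illustration of quota usage under full-script regeneration.
# quota_cost and all numbers below are assumptions, not the product's API.

def quota_cost(script_minutes: float, regenerations: int) -> float:
    """Each regeneration deducts the full narration duration from the quota."""
    return script_minutes * regenerations

# A 10-minute script that needs three separate edits:
# regenerating after each individual edit costs three full passes...
per_edit_cost = quota_cost(10, 3)   # 30 minutes deducted

# ...while making all three edits first and regenerating once costs one pass.
batched_cost = quota_cost(10, 1)    # 10 minutes deducted

print(per_edit_cost, batched_cost)
```

In this hypothetical case, batching the edits saves 20 minutes of quota on the same 10-minute script.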

Why is this happening?

šŸ–„ļø If youā€™re interested in understanding how we came to this decision, hereā€™s a peek behind the curtain:

How did we get here?

When we decided to partner with ElevenLabs, we wanted to make our quota calculations as cost-effective as possible for our users. We did something that even ElevenLabs doesn't do: we allowed users to regenerate only the altered sections of their audio.

To achieve this, we generated voiceover in sections rather than as a single unit, so that only the altered portion of the voiceover would need to be regenerated.

Why are we changing it?

To generate audio in parts, we sent each scene as a separate script to ElevenLabs, and that caused problems. ElevenLabs is an AI tool (a fabulous one), and like any AI tool, it needs context to generate results. Sending each scene individually meant that ElevenLabs' model didn't have enough context to generate the consistent and realistic voiceover our users needed.

What did we do about it?

To solve this problem, we're giving ElevenLabs what it needs to do its best work. We're now sending each user's entire script to ElevenLabs so that it has enough context to maintain consistency in pitch, speed, volume, and pronunciation. We know this might require a change to your workflow, but we're confident that the boosted realism will outweigh the learning curve.
