Last month we shared that Scriptic had raised $5.7 million in the second close of its seed round, taking the studio’s total investment to $8.2 million. The studio focuses on interactive, mobile-first experiences that let players embark on journeys created with the help of artificial intelligence tools.
We caught up with Scriptic’s CEO and co-founder, Nihal Tharoor, to discuss how this latest investment round will help the studio grow and what the future of AI in games development could look like. We also cover some best practices for using AI tools and how creatives can realise the potential of these tools rather than fear their introduction.
Pocketgamer.biz: Scriptic used to go by the name ElectricNoir. What was behind the decision to change, and what does it mean going forward?
Nihal Tharoor: We were sad to bid farewell to our old name, ElectricNoir, but it felt right to rebrand to Scriptic. We had already been using the name for our storytelling platform and app, which is available on the App Store and Google Play Store, and had rebranded Dead Man’s Phone to Scriptic: Crime Stories for its launch on Netflix Games.
We discussed it internally for a while, and we felt it was the right time.
What do you think investors see in Scriptic that makes it stand apart from others?
Our phone-first shows really sit at the intersection of gaming, traditional film and TV, so diverse talent is key
Nihal Tharoor
Firstly, we’re lucky at Scriptic to have an incredible team made up of truly passionate people with experience from across the worlds of gaming and entertainment. Our phone-first shows really sit at the intersection of gaming, traditional film and TV, so diverse talent is key – we’ve got team members who have worked at places like 2K and King or managed IPs for the likes of Disney and Marvel, working alongside writers from Channel 4 and the BBC.
Secondly, our titles to date have been aimed squarely at mainstream audiences, meaning we have a huge addressable market beyond traditional gamers. For example, 80% of Dead Man’s Phone players identify as true crime fans rather than gamers. That really speaks to the breadth of the opportunity we see in the space.
With regards to generative AI, we’ve been an early adopter of tools like ChatGPT and DALL-E, which you could say gives us an early mover advantage. We started using generative AI tools to support our production efforts in 2021. This means we’ve got a solid amount of experience and understanding across the team on how we can leverage these tools in the right way.
What will these new funds help to spearhead? New projects? New hires?
We’re using this funding to expand our content pipeline, as we’re always looking to explore new categories and genres. We’re also excited to start rolling out our user-generated content (UGC) offering and to bring external writers on board to use our AI-led creator service to build, share, and monetise their own stories.
We’re also expanding our team and looking to fill new roles in tech, production, marketing, creative and design. We’re really excited about a notable recent hire: James Nicholls, a games industry veteran with over 20 years’ experience at the likes of Codemasters and Scopely, and formerly a Senior Design & Production Director at King, who recently joined Scriptic as our Studio Director. Across product and studio operations, James will bring his deep mobile games pedigree to bear, enabling Scriptic to rapidly scale content and deliver our UGC platform infrastructure.
From the outset, we recognised the potential of using AI tools not only for production purposes but also to foster creative collaboration
Nihal Tharoor
How does Scriptic use generative AI in projects? How do you strike a balance between using AI with real creatives?
From the outset, we recognised the potential of using AI tools not only for production purposes but also to foster creative collaboration. Our core team of writers are storytellers at heart. They use generative AI as part of the creative and production process, so AI is essentially another creative tool in the toolbox, not a replacement.
For example, when our writers were developing our horror anthology Dark Mode, they saw how AI prompts could be used as a springboard to create uncanny and surreal images. It pushed our creative boundaries and introduced us to many scary ideas that we used in the final product. We found the horror genre the perfect playground for collaboration between human creatives and AI.
In terms of our production, AI helps us tighten our processes and bring costs down, so we can create content inexpensively without jeopardising quality. To give an example, because our phone-first format requires content to fit naturally on a vertical phone screen, we can use AI to manipulate and resize existing content instead of having to do another costly shoot.
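To make that vertical, phone-first constraint a little more concrete, here is a minimal sketch of requesting portrait-orientation artwork from OpenAI’s image API. The model, prompt and output handling are illustrative assumptions rather than a description of Scriptic’s actual pipeline.

```python
# Minimal sketch: generating portrait-orientation imagery for a vertical
# phone screen with the OpenAI Python client (v1+). The model, prompt and
# size below are illustrative assumptions, not Scriptic's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt="A rain-soaked city street at night, seen through a phone camera, "
           "moody true-crime atmosphere",
    size="1024x1792",  # portrait aspect ratio suited to a vertical phone screen
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```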
Which AI tools are the team using and find most useful? Are they easy to implement and how do the team ensure that the production quality remains high?
We’ve been experimenting with a range of AI tools in our production – as mentioned, we used DALL-E for image generation in Dead Man’s Phone and ElevenLabs’ text-to-speech in You Be The Judge, along with Midjourney and Runway for the image generation. For Dark Mode, we used Murf.ai and ChatGPT to generate audio and text.
We recognise the importance of not just using AI for the sake of it – the quality of the output needs to be as good as, or better than, what we could achieve through traditional, real-world production. Through our creative processes, our writers constantly workshop the AI’s outputs and evolve the inputs to ensure the final generated image is fully in line with the tone of the game and of the highest possible quality.
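For a sense of what the text-to-speech side can look like in practice, here is a minimal sketch of a call to ElevenLabs’ public text-to-speech endpoint. The voice ID, model and voice settings are placeholder assumptions, not the configuration used on You Be The Judge.

```python
# Minimal sketch: requesting a voiceover clip from ElevenLabs' text-to-speech
# REST API. The voice ID, model and settings below are placeholders.
import os
import requests

VOICE_ID = "YOUR_VOICE_ID"  # hypothetical; pick a voice from your ElevenLabs account
url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

response = requests.post(
    url,
    headers={
        "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
        "Content-Type": "application/json",
        "Accept": "audio/mpeg",
    },
    json={
        "text": "Breaking news: police have reopened the investigation.",
        "model_id": "eleven_monolingual_v1",
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    },
)
response.raise_for_status()

with open("newsreader_vo.mp3", "wb") as f:
    f.write(response.content)  # the API returns raw MP3 audio
```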
Which AI tools are you most excited about? Are there any that you can’t wait to see the next iteration of?
We’re really excited to see the development of text-to-video AI tools after using Runway Gen-2, a multi-modal text-to-video AI tool, in our recently released zombie series Viral. Text-to-video AI is still in its nascent stages, but the tech is improving rapidly – as its performance and depth improve, the opportunities will be transformative for live action Scriptic shows.
We’re also excited to see emotionally dynamic text-to-audio AI get more powerful. These tools are currently brilliant for creating voice overs for less emotional audio scenes – a newsreader VO, for example. But as text-to-audio tools get better at communicating emotions like fear and rage, they’ll become much more effective for things like interactive phone calls.
While cool GenAI content can be generated in seconds, there is an art and craft to generating the highest quality output.
Nihal Tharoor
What tips or words of guidance would you offer to developers who are yet to explore generative AI?
Don’t expect everything to work straight off the shelf. While cool GenAI content can be generated in seconds, there is an art and craft to generating the highest quality output. Prompt engineering is best learnt through an intensive, iterative process, and there are plenty of useful resources online. Our creatives have developed high-quality prompts for our particular form of phone-first live action content through exactly that kind of iteration.
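To make that iterative process a little more concrete, here is a minimal sketch of a prompt-workshopping loop built on OpenAI’s chat API: generate a draft, have a writer review it, fold their note back into the prompt and try again. The model name and prompts are assumptions for illustration, not Scriptic’s internal tooling.

```python
# Minimal sketch of an iterative prompt-workshopping loop: generate a draft,
# let a human review it, refine the prompt, and repeat until it is approved.
# The model and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You write short, punchy text messages for a phone-first interactive "
    "crime drama. Keep replies under 30 words and in character."
)
prompt = "Write the detective's first message to the player about a missing phone."

while True:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
    )
    draft = response.choices[0].message.content
    print("\nDRAFT:\n", draft)

    feedback = input("\nPress Enter to accept, or type a tweak to the prompt: ").strip()
    if not feedback:
        break  # the writer is happy with this draft
    prompt += f"\nRevision note: {feedback}"  # fold the note into the next attempt

print("\nFINAL:\n", draft)
```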
Can you tell us a little about what to expect from Scriptic in the future?
Where to start! We are working on a number of projects – there is new content dropping every month, and things are set to accelerate as we continue to expand our content team. Outside of first-party releases, we’re having some great conversations with teams and companies behind notable IP, and are excited at the prospect of bringing beloved properties to life through interactive storytelling.
We’re also hugely excited at the promise our technology and platform holds in the world of UGC – watch this space.