AI for Sales Automation, measurable 30x ROI for customers, leveraging OpenAI ChatGPT APIs & more
This week on Towards Scaling Inference, we chat with James Lo, CEO at Mana
James is the co-founder & CEO at Mana. At Mana, they are building conversational AI tools that augment how we work, learn, and live. Mana is rated #1 on Product Hunt and built with love by a team that has scaled products across Apple, Amazon, McKinsey, and SoftBank.
In this episode, he talks about their AI tools for sales automation and education. Their services have been proven to increase email open, response, and conversion rates with a 30x+ ROI for customers. He advises startups to become AI-native organizations and legacy players to use their data as a distribution advantage. His favorite Generative AI companies are Runway, Descript, and LangChain.
For the summary, let’s dive right in:
Q: Could you provide some specific use cases of how Mana AI technology has helped your customers?
Currently, we offer a suite of different AI tools with both B2B and B2C components. For our B2B clients, we focus on automating sales by allowing them to describe their ideal customer profile (ICP). We then scrape those customer profiles, generate personalized messages, and send them across various channels like LinkedIn, email, and Instagram. By doing this, we free up a lot of time for sales reps and make it easier for them to convert leads through personalized messaging.
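To make that flow concrete, here is a minimal sketch of what such a pipeline could look like. Everything here is illustrative: the function names and the `Lead` structure are hypothetical stand-ins, not Mana's actual code.

```python
# Illustrative ICP-to-outreach pipeline (hypothetical stand-ins, not Mana's code).
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    title: str
    company: str
    channel: str   # "email", "linkedin", or "instagram"

def source_leads(icp: str) -> list[Lead]:
    """Stand-in for the scraping stage: resolve an ICP description into lead profiles."""
    return [Lead("Jane Doe", "Head of IT", "Acme Corp", "email")]

def personalize(lead: Lead, pitch: str) -> str:
    """Stand-in for the LLM stage: tailor the message to one lead."""
    return f"Hi {lead.name}, as {lead.title} at {lead.company}... {pitch}"

def send(lead: Lead, message: str) -> None:
    """Stand-in for the delivery stage: route to the right channel's sender."""
    print(f"[{lead.channel}] -> {lead.name}: {message}")

icp = "Heads of IT at companies currently hiring IT staff"
for lead in source_leads(icp):
    send(lead, personalize(lead, pitch="quick intro to our IT support offering"))
```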
On the other hand, for our B2C clients, we focus on education. We have built an AI tutor specifically for K-12 students that enables them to ask almost any homework question. Instead of providing an answer, it guides them to understand how to get to the answer through step-by-step instructions. It can handle math problems and conceptual questions, thanks to multiple foundational models feeding into it.
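As an illustration of that "guide, don't answer" behavior, a system prompt along the following lines could enforce it. This is a sketch assuming the standard OpenAI chat API; it is not Mana's actual prompt.

```python
# Illustrative Socratic-tutor setup, assuming the standard OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a tutor for K-12 students. Never state the final answer. "
    "Break the problem into steps, ask one guiding question at a time, "
    "and confirm the student's reasoning before moving on."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "How do I solve 3x + 5 = 20?"},
    ],
)
print(response.choices[0].message.content)
```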
Q: Is Mana’s AI-powered software built on top of third-party APIs, or is it a custom solution developed in-house?
Currently, our technology is heavily reliant on foundational models, specifically OpenAI's GPT and Cohere's embeddings. For the Hyperscale product, we leverage users’ edited drafts and high-response emails that generate positive outcomes for our clients. With this data, we employ dynamic prompting to augment the language models and provide personalized emails to our clients over time. Our goal is to enhance the capabilities of the language models to better serve our clients.
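Dynamic prompting of this kind is commonly implemented as embedding-based retrieval: embed the client's past high-performing emails once, find the closest matches for each new lead, and inject them into the prompt as examples. A rough sketch assuming Cohere's embeddings API; the corpus and similarity logic are illustrative, not Mana's implementation:

```python
# Sketch of dynamic prompting via embedding retrieval (illustrative, not Mana's code).
import numpy as np
import cohere

co = cohere.Client()  # reads CO_API_KEY from the environment

high_response_emails = [
    "Hi {name}, saw you're scaling the IT team at {company}...",
    "Hey {name}, quick question about your current helpdesk setup...",
]

# Embed the corpus of past high-response drafts once.
doc_vecs = np.array(
    co.embed(texts=high_response_emails, model="embed-english-v3.0",
             input_type="search_document").embeddings
)

def best_examples(lead_summary: str, k: int = 1) -> list[str]:
    """Return the k past emails most similar to this lead's profile."""
    q = np.array(co.embed(texts=[lead_summary], model="embed-english-v3.0",
                          input_type="search_query").embeddings[0])
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [high_response_emails[i] for i in scores.argsort()[::-1][:k]]

# These retrieved examples are then injected into the generation prompt.
examples = best_examples("Head of IT at a 200-person fintech, hiring 3 engineers")
```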
Q: Could you walk us through the process of training an AI model for your use cases, from data collection to deployment?
Our focus is on collecting data that is inherent to the product itself. For example, in our sales automation product, we have three stages. First, we help the customer source customer data by plugging into multiple different APIs, such as ProxyCurl, Contact, etc., and combining data from these different sources. We scrape LinkedIn profiles and combine that with email and corporate email data to create a comprehensive profile. Let's say you're an IT support consultancy looking for companies that are hiring IT people. In that case, we first scrape job platforms to get the companies that are hiring, then look up the company profiles on LinkedIn behind those job postings, and then find the heads of IT at those companies on LinkedIn. We scrape across different email providers to find the corporate email domains of these individuals and combine all of that into one profile. That's the first stage.
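Expressed as code, stage one chains those lookups into a single profile. Every function below is a hypothetical placeholder for the underlying data-provider calls, not a real SDK:

```python
# Hypothetical sketch of the lead-sourcing chain described above.
def find_hiring_companies(role_keyword: str) -> list[str]:
    """Placeholder: scrape job platforms for companies posting matching roles."""
    return ["Acme Corp"]

def company_linkedin_profile(company: str) -> dict:
    """Placeholder: resolve a company name to its LinkedIn profile."""
    slug = company.lower().replace(" ", "-")
    return {"company": company, "linkedin": f"linkedin.com/company/{slug}"}

def find_head_of_it(company_profile: dict) -> dict:
    """Placeholder: find the head of IT within the company's LinkedIn presence."""
    return {**company_profile, "name": "Jane Doe", "title": "Head of IT"}

def find_corporate_email(person: dict) -> dict:
    """Placeholder: cross-reference email providers for the corporate address."""
    return {**person, "email": "jane.doe@acmecorp.com"}

# Stage 1: combine the lookups into one comprehensive lead profile per company.
leads = [
    find_corporate_email(find_head_of_it(company_linkedin_profile(c)))
    for c in find_hiring_companies("IT support")
]
print(leads)
```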
Once we have the customer data, we move on to personalization, using GPT-4 to generate personalized emails across all of those leads. The model effectively gets trained as a natural byproduct of this whole process. We have a lot of data on what users are searching for and the types of leads they care about, which helps us refine the model further. We also get a lot of data on edited drafts and high-response emails, which shows us what types of messaging are actually working for sales.
💡 For Hyperscale, we have two distinct pieces: training the personalization for the emails, which is built on large language models, and refining the search results based on the user's past searches and the types of clients that tend to work for them. Throughout this whole process, data collection is inherent to the product, because it's users working with us day to day that actually give us those data sources to use for the model in the future.
Q: Could you describe how you evaluate the performance of generative or predictive models? Do you use techniques such as fine-tuning or context-based generation?
Let me address the second part of the question first. Initially, we focused on fine-tuning with GPT-3 since GPT-4 was not yet available. We used Humanloop in the middle of our infrastructure layer for fine-tuning; it's a fantastic product with a great user experience that helped us with that. However, we found that personalization only starts kicking in after a large volume of data is used for fine-tuning. In our context, where we need to tailor the content to different clients, fine-tuning at the model level was not enough. It had to be done at a customer-specific level, which was more challenging with limited data.
💡 When GPT-4 became available, we changed our approach because there was no fine-tuning option for it. Instead, we focused on the complexity of the prompting, dynamically injecting relevant information to generate the most relevant email template for a specific user. This approach made it easier for the user to generate content that feels natural, and we can measure its success based on open and response rates. Our main focus is on tying performance to specific business outcomes to determine whether personalization and sourcing are continually improving.
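Tying prompt changes to business outcomes can be as simple as tracking open and response rates per template variant. A minimal sketch of that bookkeeping; the variant names and numbers here are made up:

```python
# Minimal sketch of outcome tracking per prompt/template variant (illustrative data).
from collections import defaultdict

events = [  # (variant, was_opened, got_response) per sent email
    ("dynamic_prompt_v2", True, True),
    ("dynamic_prompt_v2", True, False),
    ("static_template_v1", False, False),
    ("static_template_v1", True, False),
]

stats = defaultdict(lambda: {"sent": 0, "opened": 0, "responded": 0})
for variant, opened, responded in events:
    stats[variant]["sent"] += 1
    stats[variant]["opened"] += opened
    stats[variant]["responded"] += responded

for variant, s in stats.items():
    print(f"{variant}: open {s['opened'] / s['sent']:.0%}, "
          f"response {s['responded'] / s['sent']:.0%}")
```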
Q: What challenges did you face during the development and deployment process of the AI models, and how did you address them?
Our stack is fairly simple. We use base models plus Humanloop as an infrastructure layer for fine-tuning and A/B testing of model changes, and we deploy directly into our product. This layer acts as an interim between our trained models and user data: it reflects user data back to us and is the main API we call instead of the individual models. Most of our stack is built on AWS.
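The pattern here is a thin gateway: the product calls one internal endpoint, which selects a model or prompt variant, forwards the request, and logs the outcome for later fine-tuning and A/B analysis. A generic sketch of that pattern (not Humanloop's actual SDK):

```python
# Generic model-gateway pattern: one entry point instead of calling models directly.
import random

VARIANTS = {
    "control": {"model": "gpt-4", "temperature": 0.7},
    "candidate": {"model": "gpt-4", "temperature": 0.3},
}

call_log = []  # in production this would feed the fine-tuning / A/B pipeline

def call_model(prompt: str, model: str, temperature: float) -> str:
    """Placeholder for the actual foundation-model request."""
    return f"[{model}@{temperature}] reply to: {prompt}"

def generate(prompt: str) -> str:
    variant_name = random.choice(list(VARIANTS))  # naive 50/50 A/B split
    config = VARIANTS[variant_name]
    completion = call_model(prompt, **config)
    call_log.append({"variant": variant_name, "prompt": prompt, "output": completion})
    return completion

print(generate("Write a subject line for a cold email to a head of IT"))
```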
Q: What benefits have your customers seen since implementing the AI-powered solution, and how have they measured the ROI?
When our customers approach us, they typically want to improve their email open rates, response rates, and conversion rates. Our services have been proven to be effective in boosting these metrics. For instance, clients who come to us with 30-40% cold email open rates can expect to see a boost to 70% or more. Similarly, if their response rates are typically in the 2-3% range, we can get them to 10-15% almost immediately upon deployment. By personalizing the emails with details like job titles and names, we can make the recipient feel like they're being contacted by a person rather than a machine.
We can achieve even better response rates with direct messaging on platforms like LinkedIn and Instagram. By getting the tone of the message just right, we can get response rates of 25% or more. For example, when we worked with a company targeting college students, we used lots of emojis, kept the message short, and avoided capitalizing words. By focusing on these details, we were able to improve response rates significantly.
Currently, our services cost $349 for 1000 qualified leads. The ROI for our customers is massive, with even just one lead conversion per month more than paying for the service. Our customers often see a 30x+ ROI.
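As a back-of-the-envelope check on that claim: only the $349 price and the quoted response-rate range come from the interview; the deal value below is an assumption.

```python
# Back-of-the-envelope ROI check (deal value is an assumed, illustrative figure).
cost = 349                 # $ per 1000 qualified leads
leads = 1000
response_rate = 0.10       # low end of the quoted 10-15% range
assumed_deal_value = 3500  # $ per closed deal -- illustrative assumption

responses = leads * response_rate                # ~100 conversations started
deals_for_30x = 30 * cost / assumed_deal_value   # deals needed to return 30x
print(f"{responses:.0f} responses; closing ~{deals_for_30x:.0f} deals at "
      f"${assumed_deal_value} each returns 30x the ${cost} spend")
```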
Q: What advice do you have for organizations that are considering incorporating AI into their processes, and what are some common pitfalls they should avoid?
Yeah, so I think the answer to when to adopt AI is slightly different for different audiences. For startups, the best time is now. People are currently obtaining real economic value from AI, so implementing it across both the back-end processes and the product itself is crucial. Even our non-technical team is acting like a technical team because they use ChatGPT to do what they need to do. They learn the technical stuff in the process of talking to ChatGPT, making it important for startups to become AI-native organizations quickly.
💡 The common pitfall for startups is thinking of GPT-4 as a knowledge retrieval engine, which leads them to either rely too much on it or dismiss it entirely. GPT-4 is a fantastic reasoning engine, not a knowledge retrieval engine. What you feed it matters as much as, if not more than, how you prompt it. The data points matter more than how you prompt GPT-4 to personalize.
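In practice, that means putting the scraped data points directly into the prompt and letting the model reason over them, rather than asking it to recall facts on its own. A small illustrative contrast:

```python
# "Reasoning engine" usage: supply the facts, ask the model to reason over them.
lead_facts = {
    "name": "Jane Doe",
    "title": "Head of IT",
    "company": "Acme Corp",
    "signal": "posted 3 IT support job openings this month",
}

# Anti-pattern: treating GPT-4 as a knowledge base.
bad_prompt = "Write a cold email to the head of IT at Acme Corp."

# Better: ground the model in scraped data points and let it reason.
good_prompt = (
    "Using only the facts below, write a short cold email that connects "
    "our IT support offering to this person's current situation.\n"
    + "\n".join(f"- {k}: {v}" for k, v in lead_facts.items())
)
```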
For legacy players, the best thing to do is to convert your data into a moat and turn it into a distribution advantage. Legacy players have all the data, and data is the only moat left. Turning that data into a moat is essential because it enables a game-changing product. For example, Walmart has the best supermarket API in the world, which instantly earns a 2-3% affiliate fee by funneling demand to them through the products built on top of the API. In contrast, not a single supermarket player in the UK has an API, so nothing can be built on top of them. Legacy players should turn their data into moats, and those moats into distribution advantages, to win the game.
Q: Finally, what are your top three Generative AI companies, or companies that have successfully incorporated AI in their workflows, and why do you like them?
I think there are three that really stand out to me. First, Runway's product is simply amazing. The text-to-video feature always wows me. The company was one of the key players behind Stable Diffusion and has made significant contributions to the industry.
Second, Descript, more for the principle of what they are trying to achieve, which is effectively inventing a new user experience enabled by AI. This is the direction we should all strive towards, where the product is not just a superficial layer on top of LLMs, but where LLMs actually enable new capabilities that were not possible before.
Third, LangChain: it is changing how everyone deploys LLM apps and is super helpful for all the embeddings we are using. I love it.
Thanks for reading. 😄