Innodata Inc. (NASDAQ: INOD) Q1 2025 Earnings Call Transcript, May 8, 2025
Innodata Inc. beats earnings expectations: reported EPS was $0.22 against expectations of $0.17.
Operator: Good afternoon, ladies and gentlemen, and welcome to the Innodata First Quarter 2025 Results Conference Call. At this time, all lines are in listen-only mode. Following the presentation, we will conduct a question-and-answer session. [Operator Instructions] This call is being recorded on Thursday, May 08, 2025. I would now like to turn the conference over to Amy Agress, General Counsel at Innodata Inc. Please go ahead.
Amy Agress: Thank you, Lovely. Good afternoon, everyone. Thank you for joining us today. Our speakers today are Jack Abuhoff, CEO of Innodata, and Marissa Espineli, Interim CFO. Also on the call today is Aneesh Pendharkar, Senior Vice President, Finance and Corporate Development. We'll hear from Jack first, who will provide perspective about the business, and then Marissa will follow with a review of our results for the first quarter. We'll then take questions from analysts. Before we get started, I'd like to remind everyone that during this call, we will be making forward-looking statements, which are predictions, projections or other statements about future events. These statements are based on current expectations, assumptions and estimates and are subject to risks and uncertainties.
Actual results could differ materially from those contemplated by these forward-looking statements. Factors that could cause these results to differ materially are set forth in today's earnings press release in the Risk Factors section of our Form 10-K, Form 10-Q and other reports and filings with the Securities and Exchange Commission. We undertake no obligation to update forward-looking information. In addition, during this call, we may discuss certain non-GAAP financial measures. In our earnings release filed with the SEC today, as well as in our other SEC filings, which are posted on our website, you will find additional disclosures regarding these non-GAAP financial measures, including reconciliations of these measures with comparable GAAP measures.
Thank you. I will now turn the call over to Jack.
Jack Abuhoff: Thank you, Amy, and good afternoon, everyone. Our Q1 2025 revenue was $58.3 million, a year-over-year increase of 120%. Our adjusted EBITDA for the quarter was $12.7 million or 22% of revenue, a 236% year-over-year increase. We finished the quarter with $56.6 million of cash, which is a $9.7 million increase from last quarter. Our $30 million credit facility remains undrawn. We're pleased with our financial results this quarter, which, by the way, came in ahead of analyst revenue estimates. But what's even more exciting is the meaningful progress we've made on our strategic growth initiatives, much of it in just the past few weeks. I'd like to take this opportunity to walk you through the progress we're making across four of our most dynamic solutions areas, highlighting how we're aligning with evolving customer needs and how these efforts are driving both new customer wins and meaningful account expansions.
Let's first look at the work we do collecting and creating generative AI training data. We are very focused on building progressively more robust capabilities to feed the progressively more complex data requirements of large language models as they advance toward artificial general intelligence, or AGI, and eventually artificial superintelligence, or ASI. We have made, and continue to make, investments toward expanding the diversity of expert domains, like math and chemistry, for which we create LLM training data and perform reinforcement learning, while also investing in expanding languages, like Arabic and French, within these domains and creating the kind of data required to train even more complex reasoning models that can solve difficult multi-step problems within these domains.
We're also developing progressively more robust capabilities to collect pretraining data at scale. The advancements and investments we have made, and continue to make, have enabled us to gain traction with both existing customers and potential new customers. I'll take potential new customers first. We're in the process of being onboarded by a number of potentially significant customers. I'm going to share four of them with you now. The first is a global powerhouse building mission-critical systems that power everything from multinational finance and telecommunications to government operations and cloud infrastructure. It is integrating large language models and AI across its cloud infrastructure and enterprise applications to enhance automation, productivity and decision making, and also embeds generative AI directly into horizontal and vertical applications.
The second is a cloud software company that has revolutionized the way businesses manage customer relationships. It is leveraging large language models and AI to enhance customer relationship management and enterprise operations, and it is taking a leadership position in launching agentic AI capabilities to autonomously handle complex enterprise tasks. The third is a Chinese technology conglomerate that operates one of the world's largest digital commerce ecosystems. It has built its own family of LLM models incorporating hybrid reasoning capabilities and supporting multiple modalities, including text, image, audio and video. Its models are widely used for a variety of horizontal applications as well as industry-specific applications.
And the fourth is a global healthcare company that is a leader in advanced medical imaging, diagnostics, and digital health solutions. It is actively integrating LLMs and AI to enhance diagnostics, streamline clinical workflows, and improve patient outcomes, developing foundation models capable of processing multimodal data including medical images, records, and reports. Now when it comes to existing customers, we're seeing major expansion opportunities, some we've already won and others we expect to win in the near term. I'll share a few examples to illustrate the kind of traction we are now seeing. I'll start with three of our big tech customers, which until recently were relatively small accounts for us, but which are now showing signs of meaningful expansion.
I'll also touch on the continued strong momentum we're seeing within our largest customer. The first example is a customer we started working with in the second quarter of last year. Now in 2024, we recognized only about $400,000 of revenue from them. But today, by contrast, we have a late-stage pipeline that we value as having the potential to result in more than $25 million of bookings this year and continued growth over the next several years. This customer is one of the most valuable software companies in the world. The problem we are helping them solve is that their generative AI, both text and image, has not been doing a good job handling very specific, detailed and complex problems. They've shared with us that improving on these fronts was critical in order to improve product experience and provide a foundation for multimodal reasoning and agentic models of the future.
So here's a great example of an investment we are making that has specifically resulted in traction with this customer. We developed an innovative data generation pipeline that enables domain experts to create detailed hierarchical content labels across modalities, while continuing to evolve the underlying taxonomies. Our approach supports multiple types of gen AI workflows, including detailed descriptions, reverse prompting and highly specific evaluations. The second example of an existing customer with which we're seeing major expansion is a big tech company with which we had just $200,000 of revenue last year. But again, by contrast, today we are actively collaborating with them, resulting in two new wins in Q2 to-date, one signed and one we believe is about to be signed, that we value at approximately $1.3 million of potential revenue.
We also have another opportunity with them that we value at about $6 million of potential revenue. It's in the pipeline, and I'll talk about that more in a few minutes. The third example is a big tech hyperscaler with extensive generative AI capabilities across both its consumer and enterprise businesses, where it offers foundational models together with custom silicon optimized for AI workloads. We believe we will soon be engaged by it to support pretraining data collection for very specific specialized models. We'll also talk in a few minutes about additional expansion that we're driving at this account in terms of model safety and evaluation. The fourth example is one of the most highly regarded generative AI labs. We just signed a new data collection deal with them that we value at approximately $900,000 of potential revenue, and we're discussing an expansion that could potentially double that.
Pretraining data collection in the form of curated text corpora, as well as multimodal datasets remains a cornerstone for big tech companies racing to build next generation LLMs. As models grow more sophisticated, their performance hinges not just on raw computational power, but also on the breadth, depth and quality of the data they are trained on. Continuous data acquisition enables the models to better understand nuance, context, and intent across languages and domains. We believe that each of the companies I just mentioned is likely budgeting several hundred million dollars per year on generative AI data and model evaluation. So, the traction we are now seeing is super exciting and is very much the result that we have been working toward under our business plan.
Lastly, we also see expansion opportunities with our largest customer. Literally just this morning, we signed a second master SOW with them that we anticipate will enable us to deliver gen AI services funded from a distinct budget category within the customer's organization, separate from the budget that supports our existing engagements. We believe this new budget to be materially larger. Now to prepare ourselves to deliver services under this new SOW, we are making investments in customizing our proprietary LLM data annotation platforms specifically for the work that will be required under it, and we are building some additional service support capabilities. Another major area of strategic focus for us is building agentic AI solutions for our big tech customers, as well as our enterprise customers.
With one of our smaller big tech relationships, one that I discussed a few minutes ago, we have begun a collaboration around both AI agent data set creation and AI agent building. The work we are hoping to kick off with them this quarter will involve creating approximately 200 conversational and autonomous agents across multiple domains. The work involves defining use cases, developing synthetic knowledge corpora, generating demonstration datasets, building and debugging agents, and then managing agent orchestration. We believe this opportunity has the potential to be worth approximately $6 million to start. We believe agent-based AI is going to serve as the cornerstone technology that unlocks the full value of large language models and generative AI for enterprises, transforming them from powerful but isolated tools into autonomous goal driven systems that can reason, take action, and drive measurable business outcomes at scale.
Agentic AI refers to artificial intelligence systems that can autonomously initiate and carry out complex tasks in pursuit of specified goals with minimal ongoing human input. These systems go beyond reactive execution. They exhibit goal oriented behavior. They make decisions. They adapt to changing contexts, and they even take initiative to achieve outcomes. In contrast to traditional AI, which typically responds to prompts or instructions, Agentic AI is designed to operate with a degree of independence, managing multistep processes, reasoning through uncertainty, and dynamically adjusting actions based on feedback. It represents a shift from AI as a tool to AI as a collaborator, one that can understand objectives, plan strategically, and act accordingly.
Now on the subject of unlocking value for enterprises, in the last several months, we have won engagements that we value at approximately $1.6 million helping one of the world's largest social media companies integrate gen AI into their engineering operations. We are in active discussions about expanding this successful effort to other business units within the customer as well. We are providing integration services, prompt engineering, program management and on-site consulting for implementing generative AI. So far, we have automated five workflows, which we estimate will help our customer generate approximately $6 million in cost savings. The plan is to automate about 60% of the 90 identified workflows by the end of 2025, and for this to result in at least $10 million of additional savings for this customer this year, while providing additional benefits in terms of reduced friction and increased development velocity, as the engineering team can more rapidly prototype, test and refine solutions.
We are also in advanced discussions with several other companies about helping them use generative AI to enhance both products and operations. Now we've discussed how our investments and expanded capabilities in LLM training data creation and agentic AI are fueling a surge in customer engagement. We're seeing that same momentum carry over into our work in generative AI trust and safety, marking a significant expansion of our presence in a fast-growing, mission-critical segment of the market. We are pleased to announce that we have won expanded engagements to provide trust and safety evaluations for one of our existing big tech customers, again, not our largest customer, but one of the smaller relationships that's now successfully expanding. The engagements together have a potential value of approximately $4.5 million of what we believe will be annual recurring revenue.
We just started ramping the engagements up a couple of weeks ago. We anticipate working across several of their divisions, spanning English, Spanish, German and Japanese languages. We anticipate providing ongoing testing of both their public models as well as their beta models that they have not yet launched. Under these engagements, we anticipate testing both generic models and domain-specific models as well. For example, we might help ensure that a model trained to assist chemists and nuclear scientists refuses to provide advice on how to build a bomb or create crystal meth. Again, our willingness and insight to make investments proved critical in enabling us to capture this opportunity. We bolstered our proprietary trust and evaluation platform with some innovative features that our customer found compelling.
Just last week, the customer completed security reviews of our platform, enabling us to start work this week. We believe there is near-term potential to expand further our trust and safety work with this customer. We intend to be running paid pilots for other trust and safety workflows over the next few months. And to support this opportunity, we've invested in methodologies for predicting emerging areas of user interaction with advanced language models, enabling us potentially to proactively surface and address high-risk topics for trust and safety assessment. We recently demonstrated this capability to our customer, who responded with strong enthusiasm. Notably, part of these engagements involves evaluating LLMs embedded in physical devices and robotics, with which our teams will be working directly in our customer's labs to test performance at the hardware level.
With another enterprise customer, one that I mentioned earlier, we have been shortlisted as lead vendor for a multiyear program aimed at evaluating the customer's generative AI foundation models for potential harms, bias and robustness. We anticipate the annual recurring revenue of this engagement to be approximately $3.3 million. We are currently conducting proofs of concept that encompass adversarial testing, model probing, and early-stage fine-tuning pipelines. The proposed production scope includes comprehensive red teaming, implementation of guardrails, and rigorous evaluation of model behavior across text, image, video, and audio outputs. In the first quarter, we introduced our generative AI test and evaluation platform at NVIDIA's GTC 2025.
This enterprise grade solution is designed to assess the integrity, reliability, and performance of large language models across the full development lifecycle, from pre-deployment refinement to post-deployment monitoring, enabling both internal operational use cases and external customer facing applications. MasterClass served as our inaugural charter customer, and we are now in active discussions with several additional high-profile enterprises with diverse generative AI deployments. In addition, we are in active discussions with one of the world's leading global consulting firms, regarding a potential go-to-market partnership that would position them as a strategic distribution and implementation channel for our platform. From a competitive differentiation standpoint, the platform encapsulates a range of advanced techniques developed through our ongoing services engagements with leading big tech customers.
These capabilities are now productized into an autonomous system that allows enterprises to benchmark, evaluate, and continuously monitor their agents and foundation models. The platform supports evaluation against high-quality standardized benchmarks across key safety dimensions, including hallucination, bias, factual accuracy, and brand alignment, while also enabling customization through client-specific safety vectors and proprietary evaluation criteria. A key feature of the platform is its continuous attack agent, which autonomously generates thousands of adversarial prompts and conversational probes to uncover vulnerabilities in real time. Detected issues are flagged for review, allowing customers to take swift remedial action. Recommended mitigation strategies may include tailored system message design and the generation of supplemental fine-tuning datasets.
The platform is currently available through an early access program for enterprise customers, with general availability targeted for late Q2. Trust and safety evaluation is critical at both the development and production stages of large language models. During development, rigorous testing, including adversarial red teaming, is essential to uncover vulnerabilities, biases, and harmful behaviors before models are deployed. This proactive approach enables developers to build safeguards into the model architecture and fine-tuning processes. In production, continuous evaluation ensures that the models remain aligned with safety standards as they interact with real users and evolving contexts. Together, these measures are vital for ensuring that LLMs operate responsibly, mitigate risk, and maintain user trust at scale.
We believe the rapid adoption of agentic and multi-agent systems will push us to a new phase of complexity when it comes to trust and safety. In their most recent quarterly earnings reports, the Magnificent Seven technology companies, Apple, Microsoft, Amazon, Alphabet, Meta, NVIDIA, and Tesla, have each underscored their commitments to generative AI investment, viewing it as a pivotal component of their future growth strategies. Microsoft has announced plans to invest approximately $80 billion in AI infrastructure during fiscal 2025, aiming to build data centers designed to handle artificial intelligence workloads. Meta has raised its capital expenditure guidance to $64 billion to $72 billion for 2025, reflecting increased investment in AI infrastructure, including the development of new AI tools such as Llama 4 and a standalone AI assistant app.
Amazon is expanding its AI capabilities, particularly within its cloud computing division, AWS. In his annual letter to shareholders, the Amazon CEO emphasized the company's aggressive investment in AI, writing, quote, we continue to believe AI is a once-in-a-lifetime reinvention of everything that we've done. Alphabet, meanwhile, reported a 20% increase in operating income and a 46% rise in net income in Q1 2025, attributing this growth to its unique full-stack approach to AI, which encompasses infrastructure, models and applications. Given this sentiment and the significance of the Magnificent Seven and other large global technology companies to our revenue stream, we do not believe that short-term business cycles or trade policies have much of an impact on our business prospects.
It is worth noting how bullish sophisticated venture capital investors are on our sector. Our largest direct competitor is reported to be close to finalizing a secondary stock sale valuing the company at $25 billion, a multiple of 29x last year's reported revenue of $870 million, which came with a reported EBITDA loss of $150 million. Today, we are reaffirming our full year revenue growth guidance of 40% or greater. As the breadth of activity across our business illustrates, we believe the current momentum positions us well for continued strong performance. I want to say something about how we intend to manage the business over the next couple of years. Our intention is to embrace growth from both the broadening customer footprint and our largest customer.
I've shared with you today how we are achieving significant success with a diverse set of large customers that we believe could become material contributors over the coming fiscal periods. At the same time, we also see significant growth potential with our largest customer. We believe this customer will continue to expand its overall relationship with us, and we are deeply aligned with its long-term roadmap. Given that we intend to drive growth from this broadening customer footprint and our largest customer at the same time, we intend to embrace customer concentration as a natural part of our evolution. Many leading technology companies have seen similar patterns: an early period of customer concentration followed by broad-based growth as the value proposition matures and adoption scales.
We believe we are following that same path and remain confident in our ability to continue executing with discipline, while building a durable, diversified revenue engine. Inevitably, customer concentration can result in quarter-to-quarter volatility. For example, with our largest customer, we exited 2024 at an annualized revenue run rate of approximately $135 million. In Q1, we were running higher than this by about 5%, and in Q2, we anticipate that we could be lower by about 5%, but the customer's demand signals are updated continually and are highly dynamic. Going forward, we do not intend to provide granular updates at a customer level. Our 2025 financial plan reflects our conviction in the scale of the opportunity ahead. We believe we are well-positioned to drive business with an increasingly diverse group of leading big tech companies and enterprises and become a market leader in one of the most transformative technology cycles in decades.
Accordingly, we intend to reinvest a meaningful portion of our operating cash flow into product innovation, go-to-market expansion and talent acquisition, while still delivering adjusted EBITDA above our 2024 results. This too is an intentional strategy aimed at capturing long-term value in a rapidly growing and strategically important market. I'll now turn the call over to Marissa to go over the financial results, after which Marissa, Aneesh and I will be available to take questions from analysts.
Marissa Espineli: Thank you, Jack, and good afternoon, everyone. Revenue for Q1 2025 reached $58.3 million, representing a year-over-year increase of 120% and demonstrating strong momentum to start the year. Adjusted gross margin was 43% for the quarter, up from 41% in Q1 of last year. As we've discussed previously, we target an adjusted gross margin of around 40%, so we're pleased to have exceeded that benchmark to begin the year. Our adjusted EBITDA for Q1 2025 was $12.7 million or 22% of revenue, compared to $3.8 million in the same quarter last year. Net income was $7.8 million in the first quarter, up from $1 million in the same period last year. We were able to utilize the benefits of accumulated net operating losses, or NOLs, in Q1 to partially offset our tax provision.
Looking ahead, barring any changes in the tax environment, we expect our tax rate in the coming quarters to be approximately 29%. Our cash position at the end of Q1 2025 was $56.6 million, up from $46.9 million at the end of Q4 2024 and $19 million at the end of Q1 2024, reflecting strong profitability and disciplined cash management. We still have not drawn on our $30 million Wells Fargo credit facility. The amount drawable under this facility at any point in time is determined by a borrowing-base formula. We've been actively engaged in investor relations activity over the past year and expect to build on that momentum in the months ahead. We'll be participating in several upcoming investor conferences and non-deal roadshows to continue to increase awareness and deepen relationships with institutional investors.
Looking ahead, as Jack mentioned, we're planning targeted investments to expand our capabilities. This includes continued investment in technology to support both current and prospective customers in their AI journey, as well as increasing strategic hiring in sales and solutioning to drive long-term growth. In Q2, we plan to invest approximately $2 million to support a new statement of work and related programs with our largest customer, as Jack noted earlier. We expect that this investment will occur ahead of the associated revenue and is expected to temporarily impact margins in that quarter. We view this as a strategic investment that helps position us to meet customers' evolving needs and to build on the land-and-expand success we've already achieved with them.
As always, we'll remain disciplined in managing our cash and expenses, while continuing to invest where we see strong return potential and meaningful long-term value for shareholders. That's all from my end and thanks everyone. Lovely, we're ready to take questions.