Human Native AI: Bridging the Gap in AI Training Data
AI systems and large language models require vast amounts of data to train effectively, but it is crucial that these systems do not use content without proper authorization. This ethical challenge has spurred companies such as Human Native AI, a London-based startup, to develop solutions that facilitate formal content licensing agreements between rights holders and AI companies.
Human Native AI aims to create a marketplace where AI enterprises can ethically obtain data to train their models, ensuring rights holders are both consenting and compensated. Content creators can upload their works for free and enter revenue-share or subscription deals with AI companies. Human Native AI also helps prepare and price content and monitors for potential copyright infringement. The company generates revenue by taking a cut of each deal and charging AI firms for transaction and monitoring services.
The Genesis of Human Native AI
James Smith, CEO and co-founder, drew inspiration from his tenure at Google DeepMind, which faced its own data acquisition challenges. Recalling similar struggles across the AI sector, Smith realized the need for a dedicated marketplace and pitched the concept to his friend and engineer Jack Galilee. Unlike in previous brainstorming sessions, Galilee endorsed taking the idea further. Human Native AI launched in April and is currently operating in beta, with promising early demand and several signed partnerships soon to be announced.
Funding and Future Prospects
This week, Human Native AI announced a £2.8 million seed round led by LocalGlobe and Mercuri, two British micro venture capital firms. The funds will be used to expand the team. Smith has already held discussions with CEOs of century-old publishing firms, indicating strong interest from rights holders. Major AI companies have responded with similar enthusiasm, underscoring the need for such a marketplace in the AI industry.
Navigating a Complex Landscape
The problem of acquiring vast amounts of training data is pressing, as illustrated by Sony Music's recent cease-and-desist letters to 700 AI companies. The potential market for data licensing spans thousands of publishers and rights holders. Human Native AI seeks to be the infrastructure through which data transactions are streamlined, benefiting large and small AI players alike. For smaller AI companies, which may lack the resources to strike deals with major publishers such as Vox or The Atlantic, Human Native AI offers an accessible solution.
The approach also rethinks traditional content licensing, reducing upfront costs for AI companies and broadening the pool of potential buyers for rights holders. This strategy aims to democratize access to training data, positioning Human Native AI as a pivotal player in the evolving AI ecosystem.
Looking Ahead
Beyond immediate operational goals, the startup envisions a future where the accumulated data on its platform informs rights holders on optimal pricing strategies based on historical deals. Launching at a time when the AI sector faces increasing regulatory scrutiny, Human Native AI aims to set a standard for ethical data sourcing, crucial under prospective regulations in Europe and the U.S.
Smith expresses optimism for the future of AI, emphasizing the need for responsible practices that respect and support the industries supplying crucial data. His vision is one of collaborative growth, where the benefits of AI advancements are shared more equitably. "We are AI optimists on the side of humans," Smith concludes, underscoring a commitment to ensuring the ethical progression of AI technologies.