The Past, Present and Future of Hugging Face with Thomas Wolf

Nitika Sharma 30 Jan, 2024 • 4 min read

This episode of Leading with Data uncovers insights into the open-source AI revolution with Thomas Wolf, Co-founder of Hugging Face. From unconventional beginnings to pioneering the widely acclaimed Transformers library, this interview reveals the pivotal moments that have shaped Hugging Face’s commitment to democratizing AI.

You can listen to this episode of Leading with Data on popular platforms like Spotify, Google Podcasts, and Apple Podcasts. Pick your favorite to enjoy the insightful content!

Key Insights from our Conversation with Thomas Wolf

  • Open-source AI platforms like Hugging Face are pivotal in democratizing AI and fostering innovation.
  • The success of the Transformers library highlighted the community’s desire for accessible, high-quality AI tools.
  • The cost of compute is decreasing, making it more feasible for various entities to train large models.
  • The AI landscape is expected to see more open-source language models and a focus on data quality in the next 12-24 months.
  • AI’s potential applications are vast, from gaming to scientific discovery, indicating a future where AI is deeply integrated into various industries.
  • Building a culture of humility and openness is crucial for working in open-source AI projects.
  • For those starting in AI, adopting a mindset of sharing and contributing to the community is key to long-term success.

Join our upcoming Leading with Data sessions for insightful discussions with AI and Data Science leaders!

Now, let’s look at Thomas Wolf’s responses to the questions asked in this episode of Leading with Data.

How did your diverse background lead you to AI and the creation of Hugging Face?

Yeah, my journey is a bit unconventional. I started with a passion for physics and math, believing they were the most serious career paths. However, I quickly realized that the pace of physics was too slow for my liking. After completing my PhD, I ventured into law, becoming a patent attorney. This exposed me to startups and early deep learning applications, which piqued my interest in machine learning. Eventually, I joined forces with my friends to start Hugging Face, initially aiming to create an AI companion. It’s been a fascinating ride, and I’m thrilled to be part of this rapidly evolving AI landscape.

What was the turning point for Hugging Face from an AI companion to the transformative open-source platform it is today?

The pivot happened organically. We started by open-sourcing some of our research code, which unexpectedly gained massive traction. When it was time to raise our Series A, we realized the potential of focusing on our open-source efforts. The community’s response to our Transformers library was overwhelming, and we decided to bet on this direction. We believed in the power of open source and wanted to make a positive impact by providing easy access to AI tools, models, and data.

How did the Transformers library become a cornerstone for Hugging Face’s success?

The Transformers library’s success was a series of exciting moments. It began with our adaptation of GPT-1 and winning an NLP competition. But the real game-changer was Google’s release of BERT. We quickly converted BERT to PyTorch, and the community loved it. We then merged our GPT-1 and BERT code into a single library, which became the Transformers library. It was the first time we saw such a passionate response, and it solidified our commitment to maintaining and expanding the library.
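To give a sense of the accessibility Thomas is describing: with the Transformers library, loading a pretrained BERT in PyTorch takes only a few lines. Here is a minimal sketch using the library’s current AutoModel/AutoTokenizer interface (today’s API, not the original 2018 code):

```python
# Minimal sketch: load a pretrained BERT with the Transformers library
# and extract contextual embeddings for a sentence.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Tokenize a sentence and run it through the model (PyTorch tensors).
inputs = tokenizer("Hugging Face democratizes AI.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```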

With the rise of proprietary models, how does Hugging Face maintain its commitment to open-source AI?

We believe that the cost of compute will continue to decrease, making it feasible for more entities to train large models. Open-sourcing models is not only a marketing strategy but also fosters an ecosystem where the community can contribute, fine-tune, and innovate. This approach has proven successful, and we’re seeing more open-source language models now than ever before. We’re committed to this path because it aligns with our mission to democratize AI.
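As an illustration of the ecosystem effect Thomas describes: any checkpoint the community shares on the Hugging Face Hub can be pulled down and run with the same few lines. A minimal sketch using the library’s pipeline helper (the GPT-2 checkpoint here is just an illustrative choice, not one singled out in the conversation):

```python
# Minimal sketch: run any community-shared text-generation checkpoint
# from the Hugging Face Hub via the high-level pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Open-source AI matters because", max_new_tokens=30)
print(result[0]["generated_text"])
```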

What are your predictions for the AI landscape in the next 12-24 months?

I foresee more open-source language models being released, with a focus on improving data quality and exploring synthetic datasets. We might also see new architectures that aren’t Transformers, which could diversify the field. There will likely be a consolidation of products that gain widespread adoption. It’s an exciting time, with the potential for AI to unlock new discoveries in various scientific fields.

What are some unexpected applications of open-source language models that have caught your attention?

AI in games is a fascinating area, with potential for NPCs to interact in more complex ways and for dynamically created worlds. AI for science, such as healthcare and physics, is another exciting application, with the potential to make groundbreaking discoveries. The integration of AI into everyday tools to improve user experience is also promising, although it does raise concerns about making us too reliant on technology.

If you were to start a new open-source project today, what would it be?

I’d focus on creating libraries that are intuitive and easy to use. There’s still a lot to solve in AI, and I’d look for areas where current tools are cumbersome. For example, integrating language models into games or improving how we handle hallucinations in language models could be interesting challenges to tackle.

What does the future hold for Hugging Face and its open-source efforts?

Hugging Face will continue to be a platform for sharing AI resources, fostering discussions, and collaborating with the community. We’ll focus on areas like data and training tools, maintaining our culture of humility and openness. Our goal is to serve the community and enable others to build on our platform.

What advice do you have for those starting their careers in AI?

Embrace a mindset of sharing and contributing to the community. AI is still in its infancy, and there’s immense potential for growth. By being open and collaborative, you’re more likely to build something significant in the long run. Don’t be afraid to think big and long-term, but also focus on taking small, consistent steps every day.

Rapid Fire Round

  • Influential Books: I’m constantly changing what I read, but I recommend checking out my website for a list of AI-related books. Recently, I’ve been reading “The Culture Map,” which is helpful for understanding multicultural dynamics in a company like ours.
  • Work Habits: I’m definitely an evening worker, which doesn’t always align well with family life.
  • Alternate Career Path: If not for Hugging Face, I might have pursued a project related to educational toys for kids or something addressing climate change, which I’m passionate about.
  • Long-term Vision: I believe it’s essential to have long-term goals to feel like you’re moving forward in life. It’s a balance between setting ambitious objectives and managing day-to-day tasks.

Summing Up

As Hugging Face continues to be a beacon in the open-source AI landscape, its Co-founder shares a vision for the future — one marked by collaboration, innovation, and a dedication to empowering the AI community. With predictions for more open-source language models, a spotlight on diverse AI applications, and valuable advice for AI enthusiasts, this conversation encapsulates the essence of Hugging Face’s journey and the boundless potential of open-source AI.

For more engaging sessions on AI, data science, and GenAI, stay tuned with us on Leading with Data.

Check our upcoming sessions here.
