This is Startup Pirate #115, a newsletter about technology, entrepreneurship, and startups, published every two weeks. Made in Greece. If you haven’t subscribed, join 6,800 readers by clicking below:
Ad Astra is now two episodes in! E02 is the story of DeepMind.
The fascinating journey of a small London startup that trained computers to master video games and turned into a global AI leader reshaping science, medicine, and more. Watch, like, comment, and subscribe on YouTube 👇
Now, on to today’s post and guest…
The Path Forward for AI
The following is a conversation with Manos Koukoumidis, co-founder & CEO of Oumi, a community of researchers, developers, and institutions working to make frontier AI more open and collaborative. We covered a number of topics:
Why open source is the future of AI
AI needs its Linux moment
Why companies behind closed source AI models are facing a hard time
Creating an open, collaborative platform for frontier AI research
Making AI systems more reliable for real-world applications
and much more. Let's get to it.
Alex: Mano, great to chat. You've spent most of your working life at the largest tech companies (Microsoft, Meta, Google) and previously led efforts for Google's LLMs. What triggered you to start a company advocating for an open source AI future?
Manos: Early on, I realised that AI is one of the most impactful technologies, if not the most impactful, that we have today to unlock progress in almost every industry and science and help make the world a better place. Yet, for the sake of short-sighted commercialisation and in complete misalignment with humanity's best interests, the development of this highly complex technology has become increasingly closed.
Industry leaders have become secretive, moving further from open knowledge sharing and collaboration. Instead, they feverishly compete against each other to "win," wastefully re-inventing each other's work at enormous human, capital, and environmental costs. NO. AI must be accessible for everyone to use, advance, and customise.
Even more so, in the past few years I have increasingly seen how slow progress is in areas that companies like Google or OpenAI (whose name has become an oxymoron for what it actually pursues) don't allocate enough resources to, because they are not monetisable. Safety is one example. The technology is too complex for any single company to advance effectively without openly sharing knowledge and collaborating with the rest of the AI community.
The last thing that connected the dots for me was the huge tailwind from all the hyperscalers, GPU providers, and consumer companies that want open source AI to succeed. Only a few aspiring AI oligarchs (OpenAI, Anthropic, and Google) want closed AI to succeed. Nobody else.
Alex: I recently had an early OpenAI employee on this newsletter saying that "closed models will continue to have an edge in the near future". What would it take for the future of AI to be predominantly open source?
Manos: Closed models will continue to have an edge, but only for a limited time. The gap between open and closed source is closing quickly. You saw this with DeepSeek R1 and Llama. Meanwhile, many closed source AI model companies, such as Adept and Inflection, have struggled to justify their valuations or have shut down.
This pattern is straight out of the Linux playbook. In the 1980s, the major tech players (AT&T, Microsoft, Sun, IBM) developed proprietary, closed source versions of the Unix operating system, the first OS that went mainstream by running on numerous platforms. They seemed to be the only ones with the talent, quality, and capacity to develop Unix versions. Then, in the 1990s, Linux showed the world that open source was a better approach to developing such a critical and complex technology.
Linux started as a personal project by Finnish student Linus Torvalds, and it attracted attention not because it was technologically more advanced in the early days (far from it), but because it offered more control, flexibility, and transparency at a lower cost. As more developers started using and contributing to it, Linux became more mature, safe, versatile, and performant than any closed Unix alternative. Today, Linux is the de facto platform, powering all of cloud computing, most mobile devices, the internet, and the development of AI itself.
I believe the same thing will happen with AI. It has become more evident after DeepSeek, the Chinese AI company that surprised many in the AI world with an open source LLM more or less on par with advanced models from American tech giants, built at a fraction of the cost and requiring less data centre power to run. We need someone to plant the seed and lay the kindling for this fire to start burning.
For open source AI to thrive, AI needs its Linux, a place where the community collaborates and develops this technology together. That's what we set out to build at Oumi.
Alex: You announced a $10M Seed in January and an impressive list of 13 academic institutions as early supporters. Within 24 hours of launch, Oumi became the top trending repository on GitHub. What tools do you offer to AI researchers and developers, and what's your end vision for Oumi?
Manos: We are AI developers ourselves, having worked on and led teams for foundation model projects at Google and Apple. I led efforts for PaLM and Gemini, so I have experienced first-hand the struggles of the AI community.
With Oumi, we are creating an open platform for building, evaluating, and deploying cutting-edge AI models at any scale, supporting foundation model research and enabling the community to work together effectively. The platform covers everything from pre-training to post-training: full fine-tuning, parameter-efficient fine-tuning, preference optimisation, data curation and synthesis, evaluation, and other common utilities, all in a fully recordable and reproducible fashion so researchers can build on top of each other's contributions.
Oumi supports all common open models (Llama, Qwen, Phi, Mistral, etc.), anything from 3 billion to 90 billion parameters, and handles inference anywhere from your local computer to distributed clusters of any size on AWS, GCP, Azure, Lambda, Together, and so on. We even make it easy to evaluate your work on benchmarks against GPT-4 or Claude.
We launched Oumi a few weeks ago with 13 universities as early partners (Princeton, MIT, Stanford, Cambridge, and others) who utilise our platform to advance their research. It's still early days for the community, so it's an excellent opportunity for people to sign up and help us build the future of Open, Universal Machine Intelligence (what Oumi stands for).
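For readers who want to picture what one of the post-training tasks Manos lists above looks like in practice, here is a minimal sketch of a parameter-efficient (LoRA) fine-tune. It uses the Hugging Face transformers, peft, and datasets libraries rather than Oumi's own API, and the model checkpoint and data file are placeholders, so treat it as an illustration of the technique, not of Oumi itself.

```python
# Minimal LoRA fine-tune sketch (not Oumi's API): train small adapter
# matrices on top of an open model instead of all of its weights.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"   # placeholder; any open causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA: only low-rank adapters on the attention projections are trained.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Placeholder corpus: one plain-text file, tokenised for causal LM training.
dataset = load_dataset("text", data_files="my_corpus.txt")["train"]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=512)
    out["labels"] = out["input_ids"].copy()   # next-token prediction targets
    return out

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()
model.save_pretrained("lora-out")   # saves only the small adapter weights
```

Oumi's pitch, per the interview, is that this kind of workflow, together with evaluation and deployment, is captured in a recordable and reproducible way so others can rerun it and build on it.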
Alex: Are there any specific use cases you are excited about seeing people build with the platform?
Manos: Absolutely! One example is a team of researchers from the University of Illinois Urbana-Champaign working on open agentic models. These are AI systems that act autonomously to achieve complex goals without constant human guidance. They understand the goal and the context of the problem and focus on making the right decisions to reach their objectives, rather than simply generating content the way ChatGPT does. It's a popular use case for enterprises, e.g. maximising sales, customer satisfaction scores, or efficiency in supply-chain processes.
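To make "agentic" a bit more concrete, here is a tiny, hypothetical sketch of the loop such systems run: a model repeatedly decides which action (tool) to take toward a goal, observes the result, and continues. Everything in it, the goal, the tools, and the choose_action stand-in that a real system would replace with an LLM call, is invented for illustration; it is not code from Oumi or the UIUC team.

```python
# A tiny, hypothetical agent loop. choose_action() stands in for an LLM call
# that picks the next tool; the tools stand in for real enterprise systems.

def check_inventory(sku: str) -> str:
    # Pretend lookup against a warehouse system.
    return f"{sku}: 4 units left"

def reorder(sku: str) -> str:
    # Pretend call to a purchasing system.
    return f"purchase order created for {sku}"

TOOLS = {"check_inventory": check_inventory, "reorder": reorder}

def choose_action(goal: str, history: list[str]) -> tuple[str, str]:
    """Stand-in for the model: given the goal and what has happened so far,
    decide which tool to call next. A real agent would prompt an LLM here."""
    if not history:
        return "check_inventory", "SKU-123"
    return "reorder", "SKU-123"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        tool, arg = choose_action(goal, history)
        observation = TOOLS[tool](arg)
        history.append(f"{tool}({arg}) -> {observation}")
        if tool == "reorder":  # naive stopping rule, enough for the sketch
            break
    return history

if __name__ == "__main__":
    for step in run_agent("keep SKU-123 in stock"):
        print(step)
```

The point of the sketch is the control flow: the model chooses actions against a goal rather than just producing text, which is exactly the distinction drawn above.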
Alex: Others like Meta, Mistral, or DeepSeek claim to be committed to open source AI. In your launch announcement, you called out companies that develop open-weight models in a silo and even said that "open data, open code, open weights are not sufficient either". Why do you think these are not enough for AI to succeed?
Manos: Only a handful of nonprofits and academic institutions do fully open source by the OSI standard, meaning open data, open code, open models, and open weights. The rest, including Meta and DeepSeek, produce either open-weight models (the numerical parameters that define the model's internal structure and decision-making logic are freely available to access, modify, and redistribute, while the underlying training data, algorithms, and detailed architecture stay proprietary) or open models (which offer the model's weights, its underlying architecture, and training algorithms, but often not the training datasets).
Both Meta and DeepSeek have published reports, and there is a decent amount of insight into how they built their models. However, a lot of dark knowledge is still missing, including the code and data they used. It's therefore almost impossible for anybody to reproduce what they did.
Truly open source means that we can understand how something was built, reproduce it easily, and then continue building upon it. Open code, open data, open weights, and open collaboration are the basis for AI to thrive and what we aim to enable with Oumi.
Alex: We're used to companies raising tons of capital to build closed AI systems aiming for a head start in innovation and commercialisation to generate significant returns. What's your answer to critics who question the sustainability of an industry that's open source first?
Manos: Imagine there are two horses in the AI race. There is the closed source horse that tries to do everything by itself, in its own silo, bearing the full cost of its efforts, like OpenAI, Anthropic, or Google. And then there is the horse that brings together the cumulative innovation and compute contributions of a large global community of academic researchers, developers, and enterprises. Which horse would be faster? Which would be more cost-efficient? Which one would you bet your business on as a developer or investor?
I would bet on open source outpacing closed source AI every time. It's closed source that is unsustainable, and you don't have to take my word for it: look at companies like Inflection or Adept facing a rough financial reality check. Eventually, the time will come for further consolidation amongst them, as open source continues to close the gap and more enterprises move to open technologies.
Alex: What do you think will happen on the next major training run for LLMs, and what areas of AI research are you most excited about?
Manos: I keep getting surprised by the ingenuity and creativity of AI researchers. Most recently, we had DeepSeek and iterative reinforcement learning (where the model teaches itself to think more deeply about a problem).
It's hard to predict the next improvement that will unlock progress in the space. We are gradually hitting scaling limits in how much compute we can allocate to model training, and the next innovations will likely originate from human creativity — better algorithms, data, or teaching the models to improve themselves. This goes back to my previous point that an open community can progress much faster.
I'm very excited about making AI more reliable for organisations (in science and industry alike). Many cutting-edge AI systems are very capable but unreliable in critical settings. Should a pharma scientist developing the next drug have to fact-check the results or actions of a model? Do we need a human in the loop to verify the responses? For many use cases, a model that is 90% accurate doesn't make the cut. It's just not good enough.
I'm looking forward to the work of researchers and developers making AI more reliable and enterprise-friendly using Oumi.
Alex: Appreciate it, Mano!
Manos: Thank you, Alex.
Jobs
Check out job openings here from startups hiring in Greece.
News
Saronic (autonomy for naval and maritime missions) raised $600m Series C led by Elad Gil.
RoomPriceGenie (hospitality revenue management) raised $75m from Five Elms Capital.
Achira (AI drug discovery) secured $33m led by NVIDIA and Dimension.
Comulate (insurance AI) raised $20m Series B led by Bond and Workday.
Nodes & Links (AI schedule management platform) secured $12m Series B led by ETF Partners.
RankBee (marketing analytics) secured €360K from Theti Club.
CyberScope (blockchain security) was acquired by cybersecurity firm TAC Security.
Viva (neobank) acquired majority stake in Fiskaltrust (compliance-as-a-service).
The Hellenic Center for Defence Innovation announced its first projects.
Resources
Leadership in crisis by Dimitris Glezos, founder at Transifex.
The Data Career Compass, a series of posts for data professionals with Antonis Angelakis, Senior Data Analyst at Chubb.
Skroutz started as a hobby with George Hadjigeorgiou and George Avgoustidis, founders at Skroutz.
How a $6 billion fund of funds invests with Marcos Veremis, Partner at Accolade Partners.
From AlphaGo to AGI with Ioannis Antonoglou, co-founder at ReflectionAI.
Acknowledging the EU's competitiveness gap by The Greek Analyst.
Scaling Revolut’s organisation to 8,000 employees with Alexandra Loi, Chief People Officer at ESL FACEIT Group.
Building a company to simplify online purchasing with Rania Lamprou, co-founder & CEO at Simpler.
Engineering management and AI code generators with Petros Amoiridis, Owner at Amignosis.
Neo-industrialism for Europe by Kyriakos Tsitouridis, founder at Mellon Labs.
Events
Proper Input Validation in Java Spring Boot by DevSecCon Greece on Mar 7
Women Techmakers Greece by Google on Mar 9
Patras Tech Talk 2025.03 Spring edition on Mar 11
Angular Athens 25th Meetup on Mar 11
6th Athens eCommerce Meetup on Mar 12
24th Athens Laravel Meetup on Mar 13
L&D Hub Meetup #1 on Mar 13
Clean Architecture in Go by Athens Gophers on Mar 14
Shift Happens #2: Tech Careers Unlocked on Mar 15
“The Thinking Game” Screening by Big Pi Ventures on Mar 17
AI & Greece: An Emerging Innovation Destination by Hellenic Innovation Network on Mar 20
Beyond Software Automation & Testing Panda by Thessaloniki Software Testing and QA Meetup on Mar 20
Kubernetes Athens vol26 on Mar 21
That’s all for this week. Tap the heart ❤️ below if you liked this piece; it helps me understand which themes you like best and what I should do more of.
Find me on LinkedIn or X. See you in two weeks for a fascinating startup journey from 0 to $20m ARR!
Thanks for reading,
Alex