Controlling Machines With Your Mind
Brain-computer interfaces: where are we now? With Cogitat; plus reducing customer churn, making sense of numbers, 6 funding rounds, jobs, events, and more
Before we get into it this week, a quick reminder: this newsletter is about startups and technology. We look into bold teams that solve pressing problems in the world, explore tech trends, learn startup lessons, and find out what matters in Greek tech. Subscribe below, join 4,163 others, and come aboard.
You can find me on Twitter and LinkedIn.
-Alex
Controlling Machines With Your Mind
We learn:
What are brain-computer interfaces?
What is the difference between invasive and non-invasive technologies? Think Elon Musk’s Neuralink vs. Snap’s NextMind.
Are there any transformational applications of brain-computer interfaces today?
How far are we from wider adoption and what are the new paradigms that can unlock innovation?
Today, I’m really excited to have Dimitrios Adamos, co-founder & CTO of Cogitat, on Startup Pirate to deep dive into a fascinating technology that enables a direct communication pathway between the brain's electrical activity and machines. Cogitat’s software reads brain waves and converts thoughts into actions, harnessing neuroinformatics and machine learning. The team announced a pre-seed round a year ago and its groundbreaking research comes out of Imperial College London.
Let’s get to it!
It’s great to talk to you, Dimitri! Needless to say, I find what your team is building extremely interesting. I dipped my toes into brain-computer interfaces in the past while researching how computers can recognize and interpret human emotions through biosignals. But it’s been a while, so I want to hand it off to you to start. What do we mean by brain-computer interfaces?
DA: Thank you for having me, Alex! So, a brain-computer interface (BCI) enables direct communication between the activity of the human brain and an external device, which can be a computer or even a robotic arm. This is the high-level definition. There are different types of BCI depending on the technology used to record brain activity. For instance, it can be invasive or non-invasive, ranging from chips implanted onto the cortex to wearable devices that you mount on your head. And there are different types based on which brain functions they focus on. For example, we have active BCIs, where an individual moves a mouse pointer on the screen merely by thinking about it, and reactive/passive BCIs, where we assess the individual’s attention to objects or the level of relaxation they experience.
Now, all BCI technologies try to decode the electrical activity of the human brain. You have billions of neurons, the fundamental units of the brain, that use electrical impulses and chemical signals to transmit information between different areas of the brain, and between the brain and the rest of the nervous system. These neurons fire in different ways depending on what you do, what you think, and so on. The most popular non-invasive technologies use electroencephalography (EEG) signals (a.k.a. brain waves) and detect the electrical activity of your brain through small sensors attached to the scalp. Of course, not all electrical activity of the brain can be translated accurately and used for brain-computer interfaces. However, and this is where things start getting interesting for real-world applications, there’s a rather definite way to map brain activity in the premotor/motor cortex, where planning and execution of movements occur. This is because, unlike the abstractness of mental thinking or other types of cognitive processing, there’s a well-defined ground truth when it comes to physical movements (I closed my hand, moved my feet, etc.), which we can label and leverage to train machine learning algorithms. Most interestingly, even if the user only imagines the movements (without performing them), similar brain waves emerge, and this is where the magic starts! As a side note, ML plays a critical role in the field of brain-computer interfaces, as it’s all about extracting the right information from brain signals.
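The supervised pipeline Dimitrios describes — labeled movement trials used to train a decoder — can be sketched in a toy example. This is an editorial illustration, not Cogitat's actual method: the "EEG" here is a synthetic sine wave plus noise, the feature is crude mean power, and the classifier is a simple nearest-centroid rule.

```python
import math
import random

random.seed(0)

def band_power(signal):
    """Mean squared amplitude — a crude stand-in for EEG band power."""
    return sum(x * x for x in signal) / len(signal)

def make_trial(label):
    """Synthetic 'EEG' trial: imagined-movement trials carry a stronger
    oscillation than rest trials (a toy stand-in for motor-cortex rhythms)."""
    amp = 2.0 if label == "move" else 0.5
    return [amp * math.sin(0.3 * t) + random.gauss(0, 0.2) for t in range(256)]

# "Training": the ground-truth label (the performed/imagined movement) lets us
# learn one mean feature value per class.
train = [(make_trial(lbl), lbl) for lbl in ("move", "rest") * 20]
centroids = {}
for lbl in ("move", "rest"):
    feats = [band_power(sig) for sig, l in train if l == lbl]
    centroids[lbl] = sum(feats) / len(feats)

def decode(signal):
    """Nearest-centroid decoder: assign a new trial to the closest class."""
    p = band_power(signal)
    return min(centroids, key=lambda lbl: abs(p - centroids[lbl]))

print(decode(make_trial("move")))
```

Real decoders work on multi-channel, far noisier signals, but the structure — labeled trials in, a signal-to-intent mapping out — is the same.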
To many people, this might sound, you know, kind of science fiction, especially the invasive side of it (implanting chips onto the cortex). Is there an example you can think of where brain-computer interfaces are currently truly transformative?
DA: In my opinion, the most ambitious venture right now is Blackrock Neurotech, backed by investors such as Peter Thiel. They have been pioneers in the field of invasive BCI for years, using brain implants to restore function in people with paralysis and neurological disorders: they enable people to walk, talk, see, hear, and feel again. In fact, they support many patients globally, including the person with the longest-running active chronic implant, in place for more than seven years. That takes a lot of effort, and it’s a high-risk process. They’ve been running clinical trials for years, and I read that they plan to commercialize their platform next year.
Why do you think BCI hasn’t reached wider adoption yet? I understand there are many moving parts — making progress in neuroscience, hardware, software, etc. But, in your view, what could contribute to wider adoption?
DA: If we’re talking about mainstream adoption, like being able to buy this from an electronics store, then we must narrow our discussion down to non-invasive technologies. Invasive technologies, such as Blackrock Neurotech, Elon Musk’s Neuralink, or Synchron, are very far out on the spectrum of mainstream adoption, and I can’t even say with certainty that they will ever get there. Invasiveness brings a lot of risks. Like, what happens if a company goes out of business and you need to change your implant, or you need a software upgrade?
At the moment, non-invasive EEG devices are the only options we can reasonably consider. This technology has been tested for decades, but up to now there have been two main challenges. First, the non-stationary nature of brain wave signals and the variability in the underlying brain processes, with dynamic transitions between functional states. Think how fast we transition from rest to voluntary movements to cognitive processing, and back. This makes it extremely challenging for traditional machine learning approaches to decode brain waves in a robust manner. To make this work, traditional BCI approaches required extensive calibration by the user prior to each use, which made them less convenient and practical. It's a bit like having to define the exact space you can move in every time you use a VR headset, but much more involved than that. Second, and most importantly, you can’t really convince a consumer to wear a headset on their head and replace a joystick, a mouse, or any other input controller that just works.
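The calibration problem can be made concrete with a toy example. All the numbers, the drift model, and the rest-baseline normalization below are invented for illustration (real systems handle non-stationarity far more elaborately): a threshold learned in one session misfires after signals drift, while re-referencing to a short rest recording from the current session recovers the decision.

```python
import random

random.seed(1)

def session_features(label, drift):
    """Feature values for one session; `drift` stands in for the signal
    shifts (electrode placement, impedance, state changes) between sessions."""
    base = 2.0 if label == "move" else 1.0
    return [base + drift + random.gauss(0, 0.05) for _ in range(50)]

THRESHOLD = 1.5  # decision boundary learned in session A (drift = 0)

def decode_raw(x):
    return "move" if x > THRESHOLD else "rest"

def decode_normalized(x, rest_baseline):
    # Re-reference each feature to a short rest recording from the *current*
    # session — a lightweight calibration instead of full retraining.
    return "move" if (x - rest_baseline) > 0.5 else "rest"

# Session B drifts upward; the fixed threshold now labels everything "move".
drift = 1.2
rest_b = session_features("rest", drift)
move_b = session_features("move", drift)
baseline = sum(rest_b) / len(rest_b)

raw_errors = sum(decode_raw(x) != "rest" for x in rest_b)
norm_errors = sum(decode_normalized(x, baseline) != "rest" for x in rest_b)
print(raw_errors, norm_errors)
```

The point of the sketch is why per-session recalibration was traditionally needed at all, and why removing it (as Dimitrios discusses below) matters for practicality.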
To unlock mainstream adoption, as happened with many technologies that reached this stage, BCI needs a new paradigm. And I believe this is going to be virtual reality, and in particular VR headsets with EEG sensors. Let me explain why. In a virtual reality environment, the rules change. Movement in physical space is not always an option. How can you walk in VR when physically seated on an office chair or an airplane seat? How can you simulate the physical world in VR when you often have such limited physical space to move around, say the living room of your house that’s full of furniture? Handheld controllers create a number of limitations, and BCI technology can save the day. Decoding motor imagery has been the holy grail for BCI; we’re now reaching a point where we have mapped this quite well, and in my opinion, thought-enabled VR technology (e.g. moving in VR merely by thinking) will also help VR reach wider adoption by addressing existing limitations. And we are not that far from seeing this happen. The first headset that enables this new paradigm is already here, from OpenBCI. Apple has filed patents for EEG sensors on augmented reality devices, Meta recently announced research results from using AI to decode speech from brain activity, and Snap acquired NextMind to help its augmented reality research efforts (NextMind makes a hardware device worn on the head that detects neural activity from the visual cortex through EEG and translates it into digital commands in real time).
I’d like to zoom into Cogitat now. I think what you’re building fits quite nicely with the future you just described, right? Please tell us more about your technology and the applications it enables.
DA: At Cogitat, we enable immersive interactions powered by thoughts alone. We translate brain activity captured by any device into actions in the digital world. Think of it as a software layer, an API let’s say, that abstracts away the complexity of mapping brain signals to their equivalent human actions, which other companies and developers can use to build different applications. You think about moving your feet or your hands, and we translate this into moving or steering in a virtual environment. As more and more VR headsets incorporate sensors that capture parts of our brain activity, the need for such a platform becomes evident. We’ve seen the value created in other industries by platforms that give customers the full strength of an entire group of use cases without having to build things from scratch themselves (Stripe in finance, Twilio in communications, etc.), and we want to lead this change with brain-computer interfaces.
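To make the "software layer" idea concrete: the sketch below shows what such an abstraction boundary could look like from a developer's side. The class and method names are entirely hypothetical (not Cogitat's actual API): an application subscribes to high-level decoded intents and never touches raw EEG samples or device-specific details.

```python
from typing import Callable

class ThoughtInterface:
    """Hypothetical platform-layer API: device drivers and decoding models
    live below this boundary; applications only see named intents."""

    def __init__(self) -> None:
        self._handlers: dict[str, Callable[[], None]] = {}

    def on(self, intent: str, handler: Callable[[], None]) -> None:
        """Subscribe to a decoded intent such as 'move_forward'."""
        self._handlers[intent] = handler

    def _emit(self, intent: str) -> None:
        # In a real system this would be driven by the decoding pipeline as
        # brain signals arrive; here we trigger it directly to show the contract.
        if intent in self._handlers:
            self._handlers[intent]()

# A VR game registers what should happen when an intent is decoded.
bci = ThoughtInterface()
events = []
bci.on("move_forward", lambda: events.append("player stepped forward"))
bci._emit("move_forward")
print(events)
```

The design point is the same one Stripe and Twilio made in their domains: the hard, device- and signal-specific work is hidden behind a small, stable surface.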
At the core, this is a brainwave decoding technology for all EEG devices. There’s some extremely complicated neuroinformatics and machine learning science behind it, and our award-winning technology comes out of one of the most recognized ML labs in the world, that of Imperial College London, led by Cogitat’s co-founder and Scientific Lead, Stefanos Zafeiriou. You can watch this video with a demo of our technology.
Also, remember how I told you that an important reason BCI isn’t ready for prime time yet is the recalibration required every time you want to use a device? Cogitat solves this too. We are currently pursuing commercial development of applications in entertainment and healthcare, and we work not only with EEG headsets used for research but also with consumer-grade headsets that cost less and are easy to fit at home.
I understand that a big part of what you do is getting data to improve the accuracy of your technology, which is easier said than done when it comes to human brain signals. How do you address that?
DA: Developing a very accurate technology for mapping brain signals to human actions means accumulating more and more brain wave data from real people performing everyday tasks. It’s a problem that spans neuroscience and machine learning. We’re lucky to have a talented team of 12 people, full of professors, researchers, and engineers in Greece and the UK, bringing expertise from both neuroinformatics and machine learning and working hard to design the right experiments, get enough of the right data, and separate the signal from the noise. I want to point out that we’re the first company to bring real-world data into the game. We’ve already performed a large number of experiments with hundreds of individuals and different BCI headsets, where people test our technology; that’s how we get the much-needed data to improve. Most companies out there work with a very limited breadth of data points, mostly from strictly defined settings and artificial exercises. You’re most welcome to come by our lab at Imperial College London and participate in the experiments too by playing our VR games while we collect data from your brain waves. We continue to gather data from the population, and this is of critical importance to us, alongside further development on other fronts such as strategic partnerships, research methods, software engineering, etc.
All in all, developing technology for this very challenging space requires a lot of expertise and domain awareness. I think the door has opened for wider adoption of brain-computer interfaces and we strongly believe that Cogitat will be the software layer that will empower a great number of new applications and use cases that were previously unthought of — a new platform that unlocks innovation in BCI.
Dimitri, it was great to talk to you! I appreciate you taking the time.
DA: Thanks so much, Alex!
Startup Jobs
Looking for your next career move? Check out 883 job openings from Greek startups hiring in Greece, abroad, and remotely.
News
AI chip startup Axelera AI landed $27m in capital to commercialize its hardware.
ChainSafe Systems raised $18.75m Series A to create open-source infrastructure and tooling to empower developers to build the decentralized web.
QuantPi announced €2.5m Pre-seed to eliminate the uncertainty that surrounds delivering AI systems, by bringing quality control to every step of the development process.
Facial recognition startup Zenus announced a $3.2m Seed round.
Mintify, a marketplace for professional NFT traders, raised $1.6m Seed.
Medical virtual reality training startup Orama VR announced €2.4m in funding.
Startup acceleration program GreenTech Challenge by ESU NTUA accepts applications until Dec 2.
Startup Profiles
Interesting Reads & Podcasts
Making sense of numbers from George Hadjigeorgiou, co-founder & CEO of Skroutz, here.
Video from the latest Open Coffee Athens with Orfeas Boteas, founder & CEO of Krotos, George Koutsoudopoulos, CEO at tgndata, and Dimitris Georgakopoulos, co-founder of Buildium.
Podcast on reducing customer churn with Andrew Michael, founder & CEO of Avrio.
Permissionless tools based on blockchain that people can use to maintain their sovereignty, by Odysseas Lamtzidis, Engineer at Nomad, here.
Details on Greek government-backed loans for startups in this presentation from Tech Finance Network, here.
Narratives, ideas as viruses, and the zero marginal cost of reproduction when it comes to ideas adoption by John Raptis, Frontend Engineer at Clerk, here.
Improving feedback and coaching in an organisation by Aristidis Catsambas, Senior Manager for Special Projects at Monzo, here.
Marco Veremis, founding Partner at Big Pi Ventures, discussing with Lars Rasmussen his journey creating Google Maps, and more.
Events
“Accessibility testing and automating mobile applications” by Ministry of Testing Athens on Oct 31
“Startup expansion to the US” by MITEF Greece on Oct 31
“27th WordPress Thessaloniki Meetup” by Thessaloniki WordPress Meetup on Nov 5
“Performance testing with k6 & accessibility testing standards” by Athens SDET Meetup Group on Nov 7
“Prepare for what ‘Loom’s ahead” with Dr. Heinz Kabutz, by Thessaloniki not-only Java Meetup Group on Nov 8
“Meetup #6 (Virtual)” by React 2 React Athens Meetup on Nov 9
If you’re new to Startup Pirate, you can subscribe below.
Thanks for reading and see you in two weeks!
P.S. if you’re enjoying this newsletter, share it with some friends or drop a like by clicking the buttons below ⤵️