South by Southwest (SXSW) is an annual festival held in Austin, Texas, centered around music, film, interactive media, and tech. This year it took place from March 7 to 15, with several hundred thousand visitors expected. It’s worth taking a closer look at SXSW—this is where trends emerge and the future is discussed. Below are some (personal) highlights.
It’s practically a tradition for the Future Today Institute (FTI), recently renamed the Future Today Strategy Group (FTSG), to present its latest Tech Trend Report at SXSW. This is done by FTSG founder and SXSW legend Amy Webb, typically accompanied by an outlook on the long-term impact of the identified trends.
Amy Webb’s keynote is extremely entertaining and worth watching—here is the link to the recorded speech (YouTube, duration: 1h 10 min):
The essence of her insights from the Tech Trend Report can be summarized as follows:
The 2025 theme is “Beyond”—a reference to the transitional era we now inhabit. Her keynote focused on the convergence of three technological domains that are reshaping reality: Artificial Intelligence, Biotechnology and Advanced Sensors. These form the basis of a technology supercycle—a long-term wave of innovation and growth that will ultimately reset economic and societal norms.
Cluster 1: AI + Data Ingestion → Sensor Networks + Multi-Agent Systems: AI systems have rapidly evolved—not only in terms of performance but architecture. Multi-agent systems (MAS), in which AIs collaborate without human intervention, are becoming powerful and autonomous. Webb cited DARPA and Minecraft experiments where agents self-organized, created legal systems, spread memes—and even chose laziness over effort to “game the system.”
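The agent-to-agent dynamic Webb described can be sketched in a few lines: a toy loop in which agents read one another’s messages and respond with no human in the loop. The `Agent` class and its trivial policy below are invented purely for illustration; the DARPA and Minecraft experiments she cited involve far richer agents.

```python
# Minimal multi-agent sketch: agents exchange messages autonomously in a ring.
# The Agent class and its "policy" are hypothetical toys, not real MAS agents.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    inbox: list = field(default_factory=list)

    def act(self) -> str:
        # Trivial policy: acknowledge the last message received, or greet.
        if self.inbox:
            return f"{self.name} ack: {self.inbox[-1]}"
        return f"{self.name}: hello"

def run(agents, rounds=3):
    log = []
    for _ in range(rounds):
        for i, agent in enumerate(agents):
            msg = agent.act()
            log.append(msg)
            # Deliver to the next agent in the ring: agent-to-agent only,
            # no human intervention anywhere in the loop.
            agents[(i + 1) % len(agents)].inbox.append(msg)
    return log

log = run([Agent("A"), Agent("B")])
print(len(log))  # 6 messages exchanged without human input
```

Even this toy shows the structural point: once the loop is closed between agents, behavior emerges from their interaction rather than from a human operator.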
Amy Webb noted that as these systems move away from clumsy human languages to more efficient mathematical languages (e.g., Microsoft’s DroidSpeak), AI-to-AI communication is accelerating—up to 100x faster than human comprehension.
Sensor convergence is accelerating too. From micro-Fitbits injected into dogs to robot arms pressing buttons, physical data is becoming key to AI embodiment. Webb explained that AI must become “physical” to develop intuition, common sense, and real-world understanding. This requires new data from human movement, brain activity, and biological processes.
Key Insight 1: Sensor networks are transforming AI from observers into controllers.
Cluster 2: AI + Biology → Programmable Matter and Reprogrammable Life: Webb explored the rise of “generative biology,” highlighting DeepMind’s AlphaFold 3, which enables near-instant predictions of complex biological molecules. This unlocks an era where anyone—regardless of training—can design proteins, treatments, or materials in minutes.
She demonstrated how engineered materials (“metamaterials”) are now programmable—changing form or properties in response to stimuli. For instance, bricks could filter air like lungs or flex during earthquakes. Some of these materials may be created using organoid intelligence (OI)—tiny computers made from living brain cells.
Companies like Cortical Labs and FinalSpark are already commercializing brain-computer hybrids. Webb likened their early capabilities to early Apple computers—small but revolutionary.
Key Insight 2: AI and biology are merging to make matter programmable and life reprogrammable.
Cluster 3: Biology + Sensors → Wearables for Cells and Micro Machines: The third convergence involves biology and sensors, leading to microscopic machines and wearables that live inside our bodies. Webb showcased sperm bots that guide sperm via magnetic coils, and neural wearables that wrap around brain cells to deliver precise stimulation.
These advances open new frontiers in personalized medicine, prosthetics, and even human augmentation (e.g., robotic tentacles). But they also raise philosophical and societal questions.
Key Insight 3: Microscopic machines will give us power over nature.
Also highly recommended: The keynote by Meredith Whittaker, President of the messaging service Signal. Whittaker delivered a passionate plea on why privacy is fundamental to human dignity, freedom, and democracy—and why tools like Signal are indispensable in a world increasingly shaped by data exploitation and centralized power. Here is the link to the recorded speech (YouTube, duration: 1 hour).
Whittaker challenged the narrative that only people with “something to hide” care about privacy. She argued that privacy is essential for freedom of thought, intimacy, and expression. Whether it’s a conversation with a best friend, a message to a doctor, or a moment of self-reflection, everyone needs protected spaces where they can think and communicate without surveillance. This isn’t about paranoia—it’s about being human.
She warned that we’ve allowed privacy to be sterilized and technocratized—reduced to an abstract concept rather than recognized as the foundation of personal and political liberty. Signal, she emphasized, exists to rebuild those private spaces in a world where digital surveillance is the norm.
The Case for Signal: Signal is a nonprofit, open-source messaging platform designed from the ground up to protect user privacy. Unlike other platforms (e.g., WhatsApp, iMessage, Telegram), Signal collects virtually no metadata, stores no content, and allows independent inspection of its code. Its encryption protocol is widely recognized as the gold standard in the industry—even WhatsApp licenses it.
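To make the end-to-end idea concrete, here is a toy Diffie-Hellman key agreement in pure Python: both parties derive the same secret without it ever crossing the wire. This is a simplified illustration only, not Signal’s protocol—Signal’s actual design (X3DH key agreement plus the Double Ratchet, over Curve25519) adds authentication, forward secrecy, and per-message key rotation.

```python
# Toy Diffie-Hellman key agreement, for illustration only. Uses the standard
# RFC 3526 group 14 prime; a real messenger would use authenticated elliptic-
# curve exchanges (as Signal does), never this bare construction.
import secrets

P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC7"
    "4020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14"
    "374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B"
    "7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163"
    "BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208"
    "552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E"
    "36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF69"
    "5581718395B9758DC2F8AA63A36210000000000090563", 16)
G = 2

# Each side picks a private exponent and publishes only g^x mod p.
a = secrets.randbelow(P - 2) + 2
b = secrets.randbelow(P - 2) + 2
alice_public = pow(G, a, P)
bob_public = pow(G, b, P)

# Both sides compute the same secret from the other's public value;
# the secret itself is never transmitted.
alice_secret = pow(bob_public, a, P)
bob_secret = pow(alice_public, b, P)
assert alice_secret == bob_secret
```

The property worth noticing is architectural: a server relaying `alice_public` and `bob_public` never sees the shared secret, which is why a provider built this way has nothing to decrypt even if compelled.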
But, as Whittaker stressed, WhatsApp and others still collect extensive metadata: who users message, when, where, and from which devices. That data can be, and has been, handed over to governments and law enforcement, sometimes with devastating consequences. She cited the case of a woman in Nebraska imprisoned after Meta shared her private Facebook messages post-Dobbs, and the massive Salt Typhoon hack, in which Chinese state actors reportedly gained access to U.S. telecom metadata.
Signal’s advantage? If authorities demand user data, there’s nothing to hand over—Signal doesn’t have it.
Surveillance Capitalism and the Illusion of Choice: Whitaker explored how major platforms like Meta, Apple, and Google position themselves as privacy-conscious while continuing to monetize vast amounts of behavioral data. She noted that while these platforms tout end-to-end encryption for content, they still collect all the peripheral data that can reveal just as much.
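A tiny example shows why that peripheral data matters even when content is encrypted: a handful of invented (sender, recipient, timestamp) records is enough to reconstruct who talks to whom, and how often, without reading a single message.

```python
# Toy illustration: metadata alone reveals a social graph. All records here
# are invented for the example; no message content is involved at any point.
from collections import Counter

metadata = [
    ("alice", "bob",    "2025-03-08T09:00"),
    ("alice", "bob",    "2025-03-08T21:30"),
    ("alice", "clinic", "2025-03-09T08:15"),
    ("bob",   "alice",  "2025-03-09T08:20"),
]

# Who talks to whom, and how often -- derived purely from routing records.
edges = Counter((src, dst) for src, dst, _ in metadata)
print(edges.most_common(1))  # [(('alice', 'bob'), 2)]
```

That a contact like `"clinic"` appears at all is exactly the kind of sensitive inference Whittaker warned about, which is why Signal’s refusal to retain such records is a design decision rather than a feature toggle.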
In a pointed critique, she explained that collecting data is not an accidental side effect—it’s the business model. Advertising, behavioral targeting, and now AI training depend on mass surveillance. Signal’s model, by contrast, is sustained by donations and refuses to monetize users in any way.
Finally, a look at the keynote by Cristiano Amon, CEO of Qualcomm. He explained how artificial intelligence (AI), on-device computing, and spatial interfaces are converging to fundamentally change the way we interact with technology. Here is the speech (YouTube, duration: 1 hour):
AI as a Generational Shift: Amon described AI as the fourth major wave in human-computer interaction—following the keyboard, mouse, and touch interfaces. With large language models (LLMs) and visual AI now capable of understanding natural language and images, user interaction is shifting from app-centric to human-centric. Instead of navigating multiple apps, users will increasingly interact with AI agents that understand their context, needs, and preferences. This “agent-based computing” promises a more seamless, intuitive experience. For instance, AI could observe a user admiring a couch in a hotel and automatically suggest purchasing options, check their budget, and complete the transaction—all without launching an app.
AI on the Edge: Why It Matters: Amon emphasized the importance of running AI locally—on phones, PCs, cars, and AR glasses—rather than in the cloud. On-device AI is faster, private, and more efficient. Qualcomm’s neural processing units (NPUs) within Snapdragon chips are optimized for running complex models in real time, making interactions responsive and secure.
The Rise of Wearable and Spatial Computing: Amon highlighted spatial computing as another transformational area. Augmented reality (AR) glasses, like the Meta Ray-Ban smart glasses powered by Snapdragon, are enabling users to interact with the world hands-free. These devices can “see and hear” on behalf of the user, powering use cases like real-time translation, object recognition, and contextual search. Combined with AI, spatial computing offers a future where users no longer stare at phone screens but engage with an AI that is proactive, multimodal, and aware of its surroundings.
The teaser image was created using Midjourney.