Download the Tech Trend Report 2025
At SXSW, FTSG founder and SXSW icon Amy Webb (see her LinkedIn profile for more background) unveiled the latest edition of the Tech Trend Report. In her keynote, she offered a compelling, thought-provoking look at what the future might hold. The full keynote is available on YouTube (duration: 1h 10 min).
The 2025 theme is “Beyond”—a reference to the transitional era we now inhabit. Her keynote focused on the convergence of three technological domains that are reshaping reality: Artificial Intelligence, Biotechnology and Advanced Sensors. These form the basis of a technology supercycle—a long-term wave of innovation and growth that will ultimately reset economic and societal norms.
Cluster 1: AI + Data Ingestion → Sensor Networks + Multi-Agent Systems: AI systems have evolved rapidly, not only in performance but also in architecture. Multi-agent systems (MAS), in which AIs collaborate without human intervention, are becoming increasingly powerful and autonomous. Webb cited DARPA and Minecraft experiments in which agents self-organized, created legal systems, spread memes, and even chose laziness over effort to “game the system.”
Amy Webb noted that as these systems move away from clumsy human languages toward more efficient mathematical representations (e.g., Microsoft’s DroidSpeak), AI-to-AI communication is accelerating, reaching speeds up to 100x faster than human comprehension.
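To make that idea a bit more concrete, here is a minimal, purely illustrative Python sketch of two cooperating agents that exchange compact numeric vectors instead of natural-language text. It does not use DroidSpeak itself (which is not a public library); the `Agent` class, its random linear “codec,” and the toy policy are assumptions made only to show the shape of vector-based agent-to-agent messaging.

```python
import numpy as np

class Agent:
    """A toy agent that communicates via fixed-size numeric vectors
    rather than natural-language strings (illustrative only)."""

    def __init__(self, name: str, dim: int = 8, seed: int = 0):
        self.name = name
        rng = np.random.default_rng(seed)
        # A random linear "codec" stands in for a learned message encoder.
        self.encoder = rng.normal(size=(dim, dim))

    def encode(self, observation: np.ndarray) -> np.ndarray:
        # Compress an observation into a dense message vector.
        return self.encoder @ observation

    def act(self, message: np.ndarray) -> str:
        # Decide on an action from the peer's message (toy policy).
        return "explore" if message.sum() > 0 else "wait"

# Two agents coordinate with no human-readable text in the loop.
a, b = Agent("A", seed=1), Agent("B", seed=2)
observation = np.ones(8)
message = a.encode(observation)   # a dense vector, not a sentence
print(b.act(message))             # B acts directly on A's vector message
```

The point of the sketch is simply that nothing in the exchange needs to pass through language a human could read in real time, which is what makes the reported speed-ups plausible.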
Sensor convergence is accelerating too. From micro-Fitbits injected into dogs to robot arms pressing buttons, physical data is becoming key to AI embodiment. Webb explained that AI must become “physical” to develop intuition, common sense, and real-world understanding. This requires new data from human movement, brain activity, and biological processes.
Key Insight 1: Sensor networks are transforming AI from observers into controllers.
Cluster 2: AI + Biology → Programmable Matter and Reprogrammable Life: Webb explored the rise of “generative biology,” highlighting DeepMind’s AlphaFold 3, which enables near-instant predictions of complex biological molecules. This unlocks an era where anyone—regardless of training—can design proteins, treatments, or materials in minutes.
She showed how engineered materials (“metamaterials”) are now programmable, changing form or properties in response to stimuli. Bricks, for instance, could filter air like lungs or flex during earthquakes. Some of these materials may be built using organoid intelligence (OI): tiny computers made from living brain cells.
Companies like Cortical Labs and FinalSpark are already commercializing brain-computer hybrids. Webb likened their early capabilities to early Apple computers—small but revolutionary.
Key Insight 2: AI and biology are merging to make matter programmable and life reprogrammable.
Cluster 3: Biology + Sensors → Wearables for Cells and Micro Machines: The third convergence pairs biology with sensors, producing microscopic machines and wearables that live inside our bodies. Webb showcased examples such as sperm bots that guide sperm via magnetic coils, and neural wearables that wrap around individual brain cells to deliver precise stimulation.
These advances open new frontiers in personalized medicine, prosthetics, and even human augmentation (e.g., robotic tentacles). But they also raise philosophical and societal questions.
Key Insight 3: Microscopic machines will give us power over nature.
Spotlight: Artificial Intelligence
The topic of AI spans 107 pages within the 1,000-page Tech Trend Report. The table of contents alone provides a clear sense of the breadth and depth covered. The dedicated chapters include: “Models, Techniques, Research,” “Safety, Ethics, and Society,” “AI and Energy,” “AI Geopolitics, Defense and Warfighting,” “Policy and Regulation,” “Emerging Capabilities” (e.g., AI in mathematics, emotion detection), “The Business of AI,” “Talent and Education,” “Creativity and Design,” and a focused look at key industries such as pharmaceuticals, healthcare, science, finance, and insurance.
From this wealth of analysis, I’ve chosen to highlight the chapter on “Creativity and Design.” Below is an overview of its key points—especially relevant from my perspective as a power user of tools like Midjourney, Runway, Sora, and others.
The intersection of artificial intelligence (AI) with creativity and design is undergoing a fundamental transformation. We’re witnessing a redefinition of who can create, what can be created, and how the creative process unfolds. The chapter opens with an assertion that 2023 and 2024 were “creative years,” not merely due to an uptick in output but because generative AI expanded participation in the act of creation. Tools like ChatGPT, Midjourney, Runway, and others drastically lowered the barriers for people without traditional design, writing, or artistic training to generate impressive content quickly and affordably. This marks a pivotal shift in the creative landscape—one where AI is no longer just a tool but a collaborator.
Reimagining the Creative Process: The report describes how generative AI is altering the sequence of creativity. Historically, designers and artists moved from ideation to prototyping to testing. But with AI, creators often jump straight into prototyping using text-to-image or text-to-video tools, reversing the traditional pipeline. The result is a non-linear, more exploratory creative process.
An example highlighted is the film industry, where AI is now used to visualize script concepts before a director or cinematographer commits to a particular style or approach. This capability to “see” the idea before it’s built empowers stakeholders to offer earlier feedback and make more informed decisions – essentially compressing timelines while boosting creativity.
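As a minimal illustration of this prototype-first workflow, the sketch below turns a single script line into a concept frame using OpenAI’s image API. The model name, the prompt wording, and the presence of an `OPENAI_API_KEY` in the environment are assumptions for the example; any text-to-image service could stand in.

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

# A single script beat, visualized before any set, lens, or style is committed.
script_line = "EXT. NEON-LIT ALLEY - NIGHT. The detective hesitates at a flooded doorway."

result = client.images.generate(
    model="dall-e-3",          # assumed model; swap in any text-to-image backend
    prompt=f"Cinematic concept frame, moody lighting: {script_line}",
    size="1024x1024",
    n=1,
)

# The returned URL is the throwaway "prototype" that stakeholders react to early.
print(result.data[0].url)
```

The output image is disposable by design: its job is to provoke feedback minutes after the idea exists, not to be the final frame.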
Additionally, workflows are becoming multi-modal. A single idea might begin as text, become a storyboard, evolve into a rendered video, and end as an interactive virtual experience – all facilitated by AI. The distinction between designer, writer, artist, and coder blurs. This has implications not just for creative roles, but for team structures and education.
Generative AI as a Design Partner: Rather than viewing AI as a replacement for human creativity, the chapter frames it as a co-designer. AI helps surface ideas that human creators may not have considered. For example, an architect might input constraints and inspirations into an AI tool and receive a dozen viable concepts—some even “unimaginable” from a human-only perspective. This aligns with the concept of computational creativity, where the goal is not just automation but augmentation.
The key, according to the chapter, is intention – human creators set the vision and goals, while AI amplifies possibilities and streamlines execution. The creative mind shifts from “maker” to “curator,” steering the outputs and filtering the best results.
Future of Tools: Multi-Agent and Personal AI: Looking ahead, the report discusses the evolution of creative tools. One major shift involves multi-agent systems—AI collaborators that specialize in different domains. For example, in a design project, one AI might handle layout, another typography, and a third could simulate user interactions or accessibility checks. These agents can work in parallel, mimicking a creative team and enabling faster, more dynamic design work.
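A rough sketch of what such a setup could look like in code, assuming each specialist is backed by some generative model behind a common `run` interface. The agent names and the `asyncio.gather` orchestration are illustrative assumptions, not a description of any specific product.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class DesignAgent:
    """A specialist agent; in practice `run` would call a generative model."""
    specialty: str

    async def run(self, brief: str) -> str:
        await asyncio.sleep(0.1)  # stands in for a model call
        return f"[{self.specialty}] proposal for: {brief}"

async def design_round(brief: str) -> list[str]:
    # Layout, typography, and accessibility agents work on the same brief
    # in parallel, mimicking a small creative team.
    agents = [
        DesignAgent("layout"),
        DesignAgent("typography"),
        DesignAgent("accessibility"),
    ]
    return await asyncio.gather(*(agent.run(brief) for agent in agents))

if __name__ == "__main__":
    for proposal in asyncio.run(design_round("landing page for a climate-tech startup")):
        print(proposal)
```

The parallel fan-out is the interesting part: each specialist works from the same brief at the same time, and the human curator reviews the combined proposals rather than producing each piece sequentially.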
In parallel, the rise of personalized AI agents is expected to further individualize creative workflows. These agents will understand a designer’s preferences, inspirations, and past work, offering suggestions and generating content that aligns with their unique style. This could help mitigate some of the homogenization risks noted earlier.
The integration of AI into AR/VR and spatial computing environments is also highlighted. Designers will be able to sketch in three dimensions, manipulate objects with gestures, and collaborate in shared virtual spaces – pushing the boundaries of how and where creativity happens.