poetry camera writes and prints poems about the people and objects it captures
https://www.designboom.com/technology/poetry-camera-writes-prints-poems-captures-ai-claude-06-12-2025/ | Thu, 12 Jun 2025
reminiscent of an instant camera, there’s a slit below the lens where receipt-like paper prints out the poems.

Poetry Camera can write poems with the help of AI

 

Poetry Camera makes and prints poems about the people, objects, and surroundings it photographs using AI. Reminiscent of an instant camera, the boxy frame carries a large, protruding lens that scans the subject, and below it a slit where receipt-like paper prints out the poems the AI Poetry Camera digitally pens. The design looks comical with its sizeable shutter button and viewfinder, but there’s also something nostalgic about it, since it hands the user stanzas on a piece of paper rather than projecting them on a screen.

 

The Poetry Camera runs on Claude 4, an AI language model from Anthropic, which is what lets the device write poems almost instantly in literary language. Using the built-in knob, the user can choose the type of AI-generated poem they want, from haiku, sonnet, and limerick to alliteration and free verse. So far, images and poems aren’t stored digitally on the Poetry Camera, meaning the only copy the user has is the printed receipt.

all images courtesy of Poetry Camera | photo by Kaylee Pugliese/RISD

 

 

Device needs a WiFi connection to work

 

Kelin Zhang and Ryan Mather, the masterminds behind the Poetry Camera, assemble the device by hand at a ‘microfactory’ in New York. The team stays small because they want to piece the parts together individually and add a personal touch to the end result. The frame is made of vacuum-cast plastic housings, and the device runs on a Raspberry Pi Zero 2 W with a Raspberry Pi Camera Module 3. There’s a catch, however: the device can’t work without a WiFi connection, which it needs to reach the language model and start churning out AI-made poems. The team says the Poetry Camera doesn’t train the AI model it uses, and about Anthropic’s Claude 4, they add that they ‘care to pick reputable AI model providers that do not train on your data.’
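As a rough illustration of that pipeline (capture a frame, send it to Claude, print the reply), here is a minimal Python sketch; it isn’t the team’s actual open-source code. The picamera2 and Anthropic SDK calls are real APIs, while the model ID and the print_receipt() helper are assumptions for illustration.

```python
import base64

import anthropic                  # Anthropic's official Python SDK
from picamera2 import Picamera2   # camera library for Raspberry Pi

POEM_TYPES = ["haiku", "sonnet", "limerick", "alliteration", "free verse"]  # knob positions

def capture_photo(path: str = "photo.jpg") -> str:
    # grab a still from the Raspberry Pi Camera Module 3
    cam = Picamera2()
    cam.start()
    cam.capture_file(path)
    cam.stop()
    return path

def poem_for_photo(path: str, poem_type: str = "free verse") -> str:
    # send the image to Claude and ask for a poem in the chosen form
    with open(path, "rb") as f:
        image_b64 = base64.standard_b64encode(f.read()).decode()
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed ID; the device reportedly runs Claude 4
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image", "source": {"type": "base64",
                                             "media_type": "image/jpeg",
                                             "data": image_b64}},
                {"type": "text", "text": f"Write a short {poem_type} about this scene."},
            ],
        }],
    )
    return message.content[0].text

# print_receipt(text) is hypothetical -- on the real device, a thermal printer
# driver such as python-escpos would push the stanzas onto receipt paper.
```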

 

The design team describes the device as a toy for creative expression. Ryan Mather shares with designboom that he and Kelin Zhang chose printed poetry instead of photos ‘to invite you to slow down and appreciate the world around you. It’s kind of like how people tend to prefer the book version of a movie more than the movie version because their memories and imagination feel more personal and special.’ Kelin Zhang and Ryan Mather have made the design open-source, so crafty users can build the device on their own; anyone less handy can simply order a Poetry Camera and let it write and print poems on paper.

there’s a slit below the lens where the receipt-like paper prints out the poems

sample AI-generated poem by the device

the design team says that the device is a toy for creative expression

the frame is made of vacuum-cast plastic housings

the Poetry Camera runs on Claude 4, an AI language model from Anthropic


the team assembles the parts by hand in New York

view of the poem printed on a receipt-like paper | photo by Sam McAllister / Anthropic

first test assembly with the new main board


view of the redesigned camera with a knob that lets users choose the type of poem they want

 

project info:

 

name: Poetry Camera | @poetry.camera

design: Kelin Zhang, Ryan Mather

research: Anthropic | @anthropicai

language model: Claude 4

files: here

digital media fair ArtMeta brings robots, NFTs and AI art into basel’s historic heart
https://www.designboom.com/art/digital-media-fair-artmeta-robots-nft-artificial-intelligence-ai-art-basel-06-10-2025/ | Tue, 10 Jun 2025
with exhibitions, robots, and conferences led by global voices in art and culture, digital art mile invites everyone to rethink the boundaries of art in a digital age.

artmeta 2025 arrives in basel

 

From June 16 to 22, 2025, Basel becomes home to the inaugural edition of Digital Art Mile, a new and ambitious initiative by ArtMeta that transforms the historic Rebgasse district into a vibrant epicenter for digital creativity. This week-long event runs in parallel with Art Basel and offers a curated alternative that addresses a conspicuous absence: digital art. Spread across Space25, the 4th Floor, and Kult.Kino Cinema, the fair gathers an international network of artists, curators, collectors, and technologists to explore how digital media reshapes the canon of contemporary art. With exhibitions, robotic installations, and conferences led by global voices in art and culture, Digital Art Mile invites both industry professionals and the curious public to rethink the boundaries of art in a digital age.

 

For first-time visitors, Digital Art Mile offers a paradigm shift. With displays ranging from the interactive to the historically rich, the fair seeks to challenge preconceptions: ArtMeta wants to convince skeptics that digital art isn’t just speculation and NFTs, but a rich, evolving art form rooted in dialogue and human imagination.


From June 16 to 22, 2025, Basel becomes home to the inaugural edition of Digital Art Mile | all images courtesy of ArtMeta

 

 

the fair introduces the digital art mile 

 

ArtMeta, co-founded by curator and digital art pioneer Georg Bak and digital entrepreneur Roger Haas, is carving out a distinct path for how digital art is experienced, understood, and collected. The platform originally emerged from their mutual desire to elevate digital art beyond novelty, rooting it instead within a broader historical and cultural narrative.

 

For its 2025 Basel edition, ArtMeta introduces the Digital Art Mile, conceived as a boutique fair with curated exhibitions and educational programming. Unlike conventional commercial events, its focus lies in thematic cohesion and historical dialogue, linking the legacy of early digital pioneers to the cutting edge of blockchain, AI, and Web3. Through its growing curatorial reach, ArtMeta positions itself as an anchor point in the evolving landscape of digital-native cultural production.


Hackatao – PAINTBOX – Primitives (2025)

 

 

artists, curators, collectors, and technologists all meet in basel

 

Digital Art Mile 2025 offers an immersive entry point into the pluralistic worlds of digital art, from generative image-making and robotics to blockchain-based collecting and AI-driven creativity. This edition’s programming explores intersections between human expression and machine logic, between analog legacy and virtual futures. Beyond exhibitions, the fair includes a four-day conference series at Kult.Kino Cinema that brings together leading thinkers such as Christiane Paul (Whitney Museum), Ian Charles Stewart (Toledo Museum Labs), Sebastien Borget (The Sandbox), and Prof. Dr. Thomas Girst (BMW). Through these multi-perspective discussions, the fair aims not only to showcase the state of digital art but also to create frameworks for its institutional integration, economic viability, and cultural resonance.


Bryan Brinkman in the studio of Adrian Wilson


Bryan Brinkman – Love Bytes (2025)

 

 

 

A central highlight at Rebgasse 25 is the ‘Paintboxed’ exhibition, a landmark collaboration between ArtMeta, Objkt, and the Tezos Foundation. It resurrects the Quantel Paintbox, a pioneering digital painting tool from the 1980s, celebrated for its pivotal role in transforming visual culture—from MTV graphics to the iconic posters of ‘Pulp Fiction’ and ‘The Silence of the Lambs.’ Paintboxed positions this forgotten chapter of digital history in conversation with the present.

 

Artists including Justin Aversano, Grant Yun, Ivona Tau, Hackatao, and Simon Denny were invited to create new works using one of the few remaining functional Paintboxes. Tau even collaborated with ChatGPT to receive step-by-step generative painting instructions, blurring the boundaries between human intuition and AI guidance. These new creations are displayed in lightboxes and paired with NFTs minted on the Tezos Foundation blockchain, allowing collectors to own dual manifestations of the same work—both analog and digital.


Sabato Visconti – Mecha Rosie (2025)


Coldie, Keith Haring – Decentral Eyes (2025)

 

 

Located at Rebgasse 31, the 4th Floor reimagines a former warehouse as a future-forward gallery ecosystem, hosting some of the most experimental names in the space. Objkt.com presents ‘We Emotional Cyborgs: On Avatars and AI Agents,’ curated by Anika Meier—a provocative exploration of virtual identity and post-human aesthetics. Robotic artworks take center stage in Bright Moments’ ‘Automata,’ which includes autonomous painting machines creating works in real-time. Historic pioneers such as Waldemar Cordeiro, Manfred Mohr, and Joan Truckenbrod are spotlighted by Mayor Gallery, RCM, and Galerie Charlot, positioning digital art within a longer, often overlooked lineage.

 

Other participants include The Sigg Art Foundation, Cypherdudes, LaCollection, and Sarasin Foundation, each offering unique vignettes into contemporary crypto culture. A lounge hosted by Tezos Foundation offers a space to engage with the underlying technology.


Exhibition view 2024 – Aleksandra Jovanovic 2 (2025)

 

 

Digital Art Mile expands its cultural footprint with a robust conference series held at Kult.Kino Cinema on June 17 and 18. The talks tackle vital topics such as the role of digital art in museums, the evolution of AI-generated creativity, and how corporations are adopting NFTs and digital aesthetics into their branding and storytelling. Notable sessions include ‘Digital Art in Museums’ featuring Christiane Paul and Ian Charles Stewart, and ‘Digital Art in Corporations,’ moderated by designboom, with insights from BMW’s Prof. Dr. Thomas Girst and Sandbox’s Sebastien Borget. According to Bak, these sessions aim to close the gap between the institutional canonization of digital art and the vibrant discourse happening on social media. A particular point of interest is the integration of crypto culture in legacy institutions and how corporate players like UBS, Arab Bank, and luxury brands are shaping their own digital art narratives.

 

By building a space where curated exhibitions meet educational discourse, the fair aspires to become the leading marketplace and forum for digital art worldwide. Looking ahead, ArtMeta plans to expand its editorial output and continue fostering deeper conversations across cities and continents.


Adrian Wilson – Team For Hair 1985


Kiki Picasso, Fondateur de Quantel – Peter Michael par Kiki Picasso (2025)


Adrian Wilson – GPB Collage 1986


OMGiDRAWEDit, So Revival, 2025

 

 

project info:

 

name: Digital Art Mile
organization: ArtMeta | @artmetaofficial

dates: June 16 – 22, 2025

location: Rebgasse, Basel, Switzerland

sónar+D discusses quantum science in art, music by AI & future of creatives in series of talks
https://www.designboom.com/technology/sonar-d-discusses-ai-music-art-geopolitics-series-talks-06-09-2025/ | Mon, 09 Jun 2025
part of the talk and forum programs happen on the mornings of june 12th and 13th, before the sónar 2025 festival opens to the public at 3pm.

sónar+D 2025 talks about art, music and creative industries

 

Sónar+D addresses the use of quantum science in art, making music with AI, experimental video games in performances, and what the future looks like in the creative industries in a series of talks at Sónar 2025. The event runs from June 12th to 14th, 2025, at Fira Montjuïc in Barcelona, Spain, as part of the annual electronic music and digital art festival. On these days, over 100 lectures, exhibitions, workshops, and performances take place at once. designboom also hosts discussions during the festival, interviewing artists Yolanda Uriz, Dmitry Morozov aka ::vtol::, and George Moraitis on their practice and the making of their modern art, sound performances, and stage designs.

 

Yolanda Uriz uses physical phenomena, vibration, electromagnetic waves, and chemical molecules to decode sound, light, and smells in her installations and performances. This preoccupation with sound also runs through Dmitry Morozov aka ::vtol::’s robotics and installations, which place emphasis on the link between emergent systems and new kinds of technological synthesis. George Moraitis likewise works with sound, narrating memory, experience, and a sense of history through sound art, audiovisual installations, two-dimensional works, and performance. Designboom’s talks take place on June 12th from 5:30pm. Sónar+D was established in 2013 as a platform for creatives to examine the ways technology, and now AI, influences art, music, and even society. This edition’s conferences focus on three main thematic areas: AI + Creativity, Futuring the Creative Industries, and Worlds to Come.

 

Meet us at Sónar+D – tickets here!

images courtesy of Sónar, unless stated otherwise | photo by Cecilia Diaz Betz

 

 

AI + Creativity explores the politics of the new technology

 

The AI + Creativity section during Sónar+D (tickets are available here) explores how creatives can use AI in production, music, and audiovisual design. The talks also dive into the ethical and political aspects of artificial intelligence, and they complement the creative interviews in other sections, including designboom’s conversations with multi-sensory artist Yolanda Uriz, transdisciplinary artist Dmitry Morozov aka ::vtol::, and multimedia artist George Moraitis. The discussion starts with Introducing AI & Music powered by S+T+ARTS, a forum that opens the conversation on AI and sonic creativity. In another room, Libby Heaney performs Eat my Multiverse, using quantum computing for visuals, sounds, and music development. Jordi Pons’ Artistic Trends, Music & AI discusses new musical genres and sonic structures from AI algorithms, while Rebecca Fiebrink hosts Design your dream music AI tool, a session on AI tool design accessible to users without programming knowledge.

 

Joanne Armitage’s Automating Bodies: Power, Music and AI explores the power dynamics at play when users adopt AI for creative production, including an examination of gender bias in algorithmic music. Marije Baalman’s A Musical Understanding of AI as Resonance treats machine learning systems as ‘resonant entities,’ while Lamtharn (Hanoi) Hantrakul, known as ญาบอยฮานอย (yaboihanoi), gives a masterclass on using real-time audio machine learning for culturally specific sound. Visitors can take part in AI Performance Playground, an AI & Music Hacklab that lets them use AI as an actual instrument. In this section, +RAIN Film Festival also shows films produced with AI models, and AudioStellar’s Territorios sonoros emergentes demonstrates how motion tracking and AI can power dance for visual and sonic performances. At the same time, Maria Arna premieres Ama, a live musical performance pairing AI with the human voice.

Sónar+D addresses the impact of AI in music, art and more through a series of talks during Sónar 2025

 

 

Discussions on present and future of creative industries

 

Inside the Futuring the Creative Industries section, conversations spotlight the changes and opportunities within the creative sector amidst new technologies including, but not limited to, AI in music and art, cultural management, communication, advertising, experience design, and trend research. The ‘How to Future the Creative Industries’ forum features experts from institutions like the New Museum, HERVISIONS, Onassis Foundation, Serpentine Gallery, NewArt Foundation, LAS Foundation, Kapelica, gnration, and Tabakalera.

 

The session explores the role of cultural institutions in sharing new ideas and trends within a media-saturated environment. Trend analysts Berta Segura and Francesca Tur host ‘Hacking the World,’ which analyzes how marketing, geopolitics, technology, and digital culture transform creator profiles, audience formation, and artist-public interaction. The intersection of cultural heritage and digital technology is explored through ‘Lux Mundi,’ an audiovisual experience reinterpreting Romanesque fresco paintings. Artists Alba G. Corral, Massó, Desilence, and Hamill Industries collaborate with Tarta Relena for this Generalitat of Catalonia initiative.

the event runs from June 12th to 14th, 2025, at Fira Montjuïc in Barcelona | photo by Nerea Coll

 

 

Still inside the Futuring the Creative Industries section, creative collaboration and technology integration are also central. TIMES, a European network, presents ‘The Crossing’ with contributions from Margarida Mendes, Chris Watson & Izabella Dłużyk, and Saint Abdullah, Eomac & Rebecca Salvadori. Arts Korea Lab hosts ‘Future Thinking,’ where Korean creators like WOMAN OPEN TECH LAB, Earth-topia, Seungsoon Park, Hwia Kim, and Tae Eun Kim present their projects. AlphaTheta showcases its euphonia rotary mixer and virtual reality DJ suite. Music2.0 and JSPA explore the history of Japanese synthesizers.

 

MusicTech Europe, in collaboration with Barcelona Music Tech Hub, features the Music Tech Europe Academy startup presentations and ‘MusicTech Dialogues’ on data use in the creative economy. The event also includes interviews with artists and participants. Designboom interviews Yolanda Uriz, Dmitry Morozov aka ::vtol::, and George Moraitis in Lounge+D, and Time Out London also hosts live interviews. W1 Curates presents art and music collaborations on the screens of Stage+D, featuring artists such as Max Cooper and Goldie.

Actress & Suzanne Ciani present ‘Concrète Waves’ during Sónar by Day at Stage Complex+D

 

 

The last thematic area, Worlds to Come, explores speculative futures and human-technology interfaces, examining the relationship of today’s technology with culture and society. Quantum computing and non-binary perspectives surface in Libby Heaney’s ‘Eat My Multiverse’, which uses quantum computing in an artistic context and focuses on re-evaluating current global conditions. Space exploration is a recurring theme: Xin Liu’s ‘Cosmic Metabolism’ discusses scientific and poetic elements of her work, including her personal genome exhibit, ‘A Book Of Mine’. The program also investigates human interaction with technology and environment, as in Albert.DATA’s ‘SYNAPTICON’ performance, which demonstrates real-time brain activity using brain-computer interfaces.

 

Danielle Braithwaite-Shirley’s ‘WE CAN’T PRETEND ANYMORE’ offers an interactive digital narrative exploring the history of Black trans individuals. Tega Brain’s ‘Questions of Automation’ addresses digital sustainability through creative coding and DIY strategies, highlighting political and environmental concerns. Discussions extend to social innovation and community building. ‘Portals: Talks of Worlds to Come’, presented by The Social Hub, features a panel of experts discussing design, sustainability, and cultural innovation in shared spaces. The program also includes performances, such as Luis Garbán (Cardopusher) with ‘DESTRUCCIÓN’, an audiovisual project combining reggaeton, industrial, and breakcore. Each of these talks and forums contributes to the overall purpose of Sónar+D, which is to create a space for knowledge exchange between different professional fields. These programs coincide with the Sónar 2025 festival, which runs between June 12th and 14th.

Stage+D by MEDIAPRO, Playmodes, UPC-Telecos present Astres | photo by Nerea Coll

Lux Mundi installation by Alba G. Corral, Massó, Desilence & Hamill Industries with Tarta Relena at Sónar+D


Sónar+D shows a replica of the apse of Sant Climent de Taüll to host Lux Mundi

Yolanda Uriz’s Chemical Calls of Care | image courtesy of Yolanda Uriz

Chemical Calls of Care (2024), an interactive installation on audio-olfactory communication | image courtesy of Yolanda Uriz


Edge is a kinetic, sound and light object | image courtesy of ::vtol::

iPot is a device for performing a digital tea ceremony | image courtesy of ::vtol::

Schematic by George Moraitis | image courtesy of George Moraitis


Xe by George Moraitis | image courtesy of George Moraitis

 

project info:

 

event: Sónar 2025 | @sonarfestival

program: Sónar+D

location: Palau de Congressos de Fira Montjuïc, Barcelona, Spain

dates: June 12th to 14th, 2025

photography: Cecilia Diaz Betz, Nerea Coll | @ceciliadiazbetz, @nereacoll

entry: tickets here

why AI language models like chatGPT and gemini can’t understand flowers like humans do
https://www.designboom.com/technology/ai-language-models-chatgpt-gemini-understand-flowers-ohio-state-university-06-04-2025/ | Wed, 04 Jun 2025
this study suggests that large language models cannot represent human concepts, where senses or actions are involved, without experiencing the world through the body.

ohio state university researchers consider capacity of ai models

 

Imagine learning the concept of a flower without ever smelling a rose or brushing your fingers across its petals. We might be able to form a mental image or describe its characteristics, but would we still truly understand the concept? This is the essential question tackled in a recent study by The Ohio State University, which investigates whether large language models like ChatGPT and Gemini can represent human concepts without experiencing the world through the body. The answer, according to the Ohio researchers and collaborating institutions, is that this isn’t entirely possible.

 

The findings suggest that even the most advanced AI tools still lack the sensorimotor grounding that gives human concepts their richness. While large language models are remarkably good at identifying patterns, categories, and relationships in language, often outperforming humans in strictly verbal or statistical tasks, the study reveals a consistent shortfall when it comes to concepts rooted in sensorimotor experience. And so, when a concept involves senses like smell or touch, or bodily actions like holding, moving, or interacting, it seems that language alone isn’t enough.

all images courtesy of Pavel Danilyuk via Pexels | @rocketmann_team

 

 

chatgpt & gemini might not fully grasp the concept of a flower

 

The researchers at The Ohio State University tested four major AI models — GPT-3.5, GPT-4, PaLM, and Gemini — on a dataset of over 4,400 words that humans had previously rated along different conceptual dimensions. These dimensions ranged from abstract qualities like ‘imageability’ and ‘emotional arousal,’ to more grounded ones like how much a concept is experienced through the senses or through movement.

 

Words like ‘flower’, ‘hoof’, ‘swing’, or ‘humorous’ were then scored by both humans and AI models for how well they aligned with each dimension. While large language models showed strong alignment in non-sensorial categories such as imageability or valence, their performance dropped significantly when sensory or motor qualities were involved. A flower might be recognized as something visual, for instance, but the AI struggled to fully represent the integrated physical experiences that most people naturally associate with it. ‘A large language model can’t smell a rose, touch the petals of a daisy, or walk through a field of wildflowers,’ says Qihui Xu, lead author of the study. ‘They obtain what they know by consuming vast amounts of text — orders of magnitude larger than what a human is exposed to in their entire lifetimes — and still can’t quite capture some concepts the way humans do.’
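The alignment measurement itself is easy to picture: score each word on a dimension, then correlate the model’s ratings with the human norms. Below is a minimal sketch of that comparison, with invented toy ratings standing in for the study’s data.

```python
from scipy.stats import spearmanr

# invented toy ratings (on an assumed 1-7 scale) for illustration only --
# the study used human norms for over 4,400 words across many dimensions
human_smell = {"flower": 6.8, "hoof": 2.1, "swing": 1.4, "humorous": 1.1}
model_smell = {"flower": 6.2, "hoof": 3.9, "swing": 1.8, "humorous": 1.3}

words = sorted(human_smell)
rho, p = spearmanr([human_smell[w] for w in words],
                   [model_smell[w] for w in words])
print(f"alignment on the 'smell' dimension: Spearman rho = {rho:.2f}")
```

A high correlation on a dimension would indicate alignment; the study found that alignment dropped precisely on sensorimotor dimensions like smell and touch.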

investigating whether large language models like ChatGPT and Gemini can accurately represent human concepts

 

 

the role of the senses and bodily experience in thought

 

The study, recently published in Nature Human Behaviour, taps into a long-running cognitive science debate over whether we can form concepts without grounding them in bodily experience. Some theories suggest that humans, particularly those with sensory impairments, can build rich conceptual frameworks using language alone. But others argue that physical interaction with the world is inseparable from how we understand it. A flower, in this context, is perceived as more than an object with a form: it is a set of sensory triggers and embodied memories, such as the sensation of sunlight on your skin or the moment of stopping to sniff a bloom, along with emotional associations with gardens, gifts, grief, or celebration. These are multimodal, multisensory experiences, and this is something current language models like ChatGPT and Gemini, trained mostly on internet text, can only approximate.

 

Speaking to their capacity, however, one part of the study shows that AI models accurately linked roses and pasta as both being ‘high in smell.’ But humans are unlikely to think of them as conceptually similar, because we don’t compare objects by single attributes alone; we draw on a multidimensional web of experiences that includes how things feel, what we do with them, and what they mean to us.


the study by The Ohio State University suggests that these AI models cannot understand sensorial human experiences

 

 

the future of large language models and embodied ai

 

Interestingly, the study also found that models trained on both text and images performed better in certain sensory categories, particularly in dimensions related to vision. This hints at future scenarios in which multimodal training (combining text, visuals, and eventually sensor data) might help AI models get closer to human-like understanding. Still, the researchers are cautious. As Qihui Xu notes, even with image data, AI lacks the ‘doing’ part: how concepts are formed through action and interaction.

 

Integrating robotics, sensor technology, and embodied interaction could eventually move AI toward this kind of situated understanding. But for now, the human experience remains far richer than what language models — no matter how large or advanced — can replicate.


in one part of the study AI models accurately linked roses and pasta as both being ‘high in smell’

 

project info:

 

language models: Gemini, ChatGPT

companies: Google, OpenAI | @google, @openai

photography: Pavel Danilyuk | @rocketmann_team

mushrooms and machine learning shape studio weave’s intelligent garden in chelsea
https://www.designboom.com/architecture/mushrooms-machine-learning-studio-weave-intelligent-garden-chelsea-flower-show-london-06-04-2025/ | Wed, 04 Jun 2025
studio weave’s intelligent garden presents a compostable building and AI-supported planting scheme at the chelsea flower show 2025.

an intelligent Garden Alive with Data

 

In a quiet corner of London’s Chelsea Flower Show, Studio Weave’s Avanade Intelligent Garden pulses beneath the textures of bark and lush foliage. The project gathers and interprets signals from its plants, soil, and air to form an AI-driven ecosystem that listens as much as it grows. The English architects, in collaboration with landscape designer Tom Massey and natural materials expert Sebastian Cox, have created an architectural presence within the garden that reflects both ecological knowledge and digital intuition. The result is a place of learning, adjusting, and responding that’s alive with signals and wrapped in a facade of mushroom mycelium.

 

This year’s gold medal-winning entry comes from a carefully tuned partnership. Massey’s planting scheme, Cox’s material intelligence, and Studio Weave’s architectural framing find coherence through a shared interest in craft and care. Rather than standing apart, the building acts as a lightly held edge. It folds around the perimeter, creating an inner clearing that functions like a micro-courtyard — a calm interior within the lush density of the Intelligent Garden.

images © Daniel Herendi

 

 

a form informed by mushroom mycelium

 

Studio Weave‘s shed structure within the Intelligent Garden rises from materials that carry their own narratives. Ash timber, harvested from diseased trees in local forests, has been woven and curved to shape the outer skin. Between the slats, natural light lands on the softly undulating surface of mycelium panels. These fungal forms, grown in Sebastian Cox’s Kent workshop from agricultural byproducts, bring both tactile richness and a low-impact material footprint. Together they form a type of garden architecture that feels grown as much as it is built.

 

This intervention carries more function than its restrained form suggests. It provides shelter and workspace for its gardener-custodians — people tasked with tending the Tom Massey-designed garden and managing the technology embedded within it. Avanade’s AI platform gathers live data on soil health, humidity, and light exposure, offering caretakers a nuanced picture of how each tree and plant responds to its environment. The table inside serves both workshop and observation, reinforcing the idea that care and technology must coexist at a very human scale.
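Avanade hasn’t published the platform’s internals, so the sketch below is only an illustration of the general idea of turning live sensor readings into caretaker guidance; the field names, units, and thresholds are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    tree_id: str
    soil_moisture: float  # volumetric %, assumed scale
    humidity: float       # relative %, assumed scale
    lux: int              # light level

def advise(r: Reading) -> str:
    # toy rule set standing in for the platform's predictive models
    if r.soil_moisture < 20:
        return f"{r.tree_id}: soil is drying out -- irrigate soon"
    if r.lux < 5_000:
        return f"{r.tree_id}: low light today -- check canopy shading"
    return f"{r.tree_id}: conditions look healthy"

print(advise(Reading("ash-01", soil_moisture=14.0, humidity=62.0, lux=21_000)))
```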

Studio Weave collaborates with Tom Massey and Sebastian Cox to create the Intelligent Garden

 

 

studio weave Designs for Disassembly

 

Tucked within the structure is a shaded, humid corner that leans into the mystery of the mycelium. Here, in conditions designed for growth rather than display, the garden shows its quieter work. Fungal fruiting bodies emerge in their own time, fed by a microclimate that speaks to forest understories. It is a moment of architectural pause, and a reminder that some processes can be invited but never controlled.

 

Though the Intelligent Garden is a temporary installation, the building’s afterlife has been carefully plotted. Prefabricated in four volumes, it was assembled quickly on-site and will move to Manchester’s Mayfield Park after the show. The building’s construction avoids permanence in favor of adaptability. Every joint, weave, and panel has been designed with disassembly in mind. The entire structure is biodegradable or recyclable, with nothing left as waste. It is, in essence, a compostable building.

locally-sourced Ash and mycelium emphasize sustainability and material storytelling

 

 

Beyond the structure, the Intelligent Garden makes a pointed case. Trees in urban areas are under threat from poor planting conditions, neglect, and environmental stress. Nearly half fail within ten years. The garden does not offer a single fix. Instead, it puts forward a layered system — where AI is a tool, not a substitute, for long-term stewardship. Through sensors and predictive models, the technology here helps direct limited resources where they’re most needed, supporting both survival and growth over time.

 

This is the second year Studio Weave and Tom Massey have collaborated at Chelsea. Their previous entry also received gold, but this year’s work pushes further into a cross-disciplinary space. Known for projects that engage civic and natural contexts with unusual sensitivity, Studio Weave brings architecture into conversation with planting and performance. The firm’s ability to work fluidly between disciplines is evident in how the structure holds the garden without overwhelming it.

the garden integrates AI technology to support the long-term care and survival of urban trees

sensors and AI track soil health and environmental data for optimal growing conditions


an interior courtyard is designated for workshops and quiet observation

the building was prefabricated in modular volumes and designed for reuse after the show


a ‘mushroom parlour’ demonstrates ideal conditions for fungal growth

 

project info:

 

name: Avanade Intelligent Garden and Building

architect: Studio Weave | @studioweave

event: Chelsea Flower Show 2025

location: London, United Kingdom

landscape design: Tom Massey | @tommasseyuk

materials: Sebastian Cox | @sebastiancoxltd

digital systems: Avanade Inc. | @avanadeinc

photography: © Daniel Herendi | @neverordinaryview

opera neon browser has ‘agents’ that can do things on the web for you when you’re offline
https://www.designboom.com/technology/opera-neon-browser-ai-agents-web-offline-05-29-2025/ | Wed, 28 May 2025
dubbed agentic AI browsing capabilities, there’s a chat feature on the browser that can take on the requests.

AI Agents work for Opera Neon Browser and its users

 

The Opera Neon browser has AI agents that can keep working on users’ projects and tasks even when they’re offline. Dubbed agentic AI browsing capabilities, the first step is a chat feature on the browser that takes on requests. Here, users can ask the AI agents to search the web for what they need, receive answers to a question, and get a summary of the webpage they’re on. These features aren’t new, however; apps on other browsers, such as OpenAI’s ChatGPT and Google’s Gemini, can do them too.

 

What the browser brings to the table is automated routine web tasks: the Opera Neon AI agents can fill out forms, reserve hotel rooms, book flights, shop, and even create a website for users while they’re away from their computers. The browser interacts with the different websites itself and performs these tasks on its own, all the while ‘preserving users’ privacy and security,’ says the Opera team. With this, AI isn’t only a system, but the ‘chip’ that powers a group of unseen digital robots completing tasks independently.

all images courtesy of Opera

 

 

Three options: chat, do, and make

 

What Opera feels sets Neon apart from other browsers is that its AI agents can continue working on the users’ creations even after they go offline. People, for example, can ask the browser to make a game, a report, a snippet of code, or even a website. The ever-loyal AI agents of Opera Neon then research, design, and build all of it, and simply forward the final output to the users. The company says users can also ask the agents to make multiple things or work on different tasks at once. Once on the browser, there are three options to choose from, named Chat, Do, and Make. Chat works much like ChatGPT and Gemini.

 

Do is the one that handles the planning, booking, buying, and reserving. The team says this system understands webpages through the DOM tree and layout data, ‘not by analyzing pixels or using a virtual pointer. Because it operates natively on your browser, your data – like browsing history, logins, and cookies – stays private and local.’ Make, then, is the builder: it can develop a video game or a website with a niche function like comparing stocks, and when it faces problems along the way, it tries to correct and fix them itself. The company believes almost no external help is needed. These ‘hubs’ and AI agents are all present in the Opera Neon browser. A heads-up, however: they’re only available through the company’s ‘premium subscription product.’
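Opera hasn’t shared how Do issues its actions, but the DOM-first approach it describes, addressing page elements directly rather than analyzing pixels, can be sketched with an off-the-shelf browser automation library. The site and selectors below are hypothetical.

```python
from playwright.sync_api import sync_playwright

# a toy 'Do'-style task: fill a booking form by addressing DOM elements
# directly instead of analyzing pixels or moving a virtual pointer
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/book")   # hypothetical booking page
    page.fill("#checkin", "2025-07-01")     # hypothetical selectors
    page.fill("#checkout", "2025-07-05")
    page.select_option("#guests", "2")
    page.click("button[type=submit]")
    print(page.title())                     # confirm the result page loaded
    browser.close()
```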

the Opera Neon browser’s AI agents can complete tasks for the users

there are three tabs to choose from, depending on what the user needs

the Opera Neon Browser AI agents work on the tasks even when the users are offline

the browser interacts with different websites itself and performs these tasks on its own

people, for example, can ask the browser to make a game, a report, a snippet of code, or even a website


so far, the browser is available as a premium subscription product

 

project info:

 

name: Opera Neon

company: Opera | @opera

memoria home medical device and necklace help people with alzheimer’s remember
https://www.designboom.com/technology/memoria-home-medical-device-necklace-people-alzheimers-futurewave-05-23-2025/ | Fri, 23 May 2025
it’s a two-part collection: the first one has a sculptural base station with a large, circular AMOLED screen, while the other is a discreet wearable.

home medical devices for people with Alzheimer’s

 

Futurewave designs Memoria, an AI home medical device paired with a bracelet or necklace that helps people with Alzheimer’s remember. It’s a two-part collection. The first part is a sculptural base station with a large, circular AMOLED screen, which displays photos, videos, memory prompts, names, and the time so users can recall who and what surrounds them. The second is a discreet wearable, either a bracelet or a necklace. Users first place smart chips around the house; the wearable then interacts with these chips, giving the user real-time haptic cues and voice feedback to signal that someone, or something in this case, is with them at all times.

 

The screen of the sculptural base station appears semi-translucent, similar to the ‘privacy’ protectors for smartphones, and it flashes images and videos vividly, especially when placed in a sunny room. The typeface is sans-serif and the font size large, so the projections are immediately legible. The base station resembles a desk fan, and its bottom doubles as a speaker. The wearable, which looks like a small computer mouse, has a textured skin for haptic feedback and a small slot for the speaker. So far, Futurewave’s home medical device and wearable for people with Alzheimer’s is still a concept project.

all images courtesy of Futurewave

 

 

AI lends a hand to Futurewave’s memoria 

 

The Futurewave team says the display of the sculptural base station is AMOLED, allowing for soft visuals and adaptive brightness. The wearable features low-energy Bluetooth as well as NFC capability, which let the device interact with the smart chips installed around the house. Both devices are powered by AI, letting them learn routines, recognize patterns, and adapt prompts to the user’s emotional and cognitive needs. The system isn’t fully reliant on technology, either: the home medical devices for people with Alzheimer’s also support outdoor safety and remote caregiver monitoring, so a human can take care of the users even from afar.
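Since Memoria is still a concept, there is no published protocol to cite; the toy loop below only illustrates the chip-to-wearable interaction described above, with invented chip IDs and a print statement standing in for the haptic motor and speaker.

```python
# invented mapping from smart-chip IDs (placed around the house) to prompts
CHIP_PROMPTS = {
    "chip-kitchen": "This is your kitchen. Your medication is in the blue box.",
    "chip-door": "You are at the front door. Your keys hang on the hook.",
}

def on_chip_detected(chip_id: str) -> None:
    # a real wearable would fire a haptic buzz and speak the prompt here
    prompt = CHIP_PROMPTS.get(chip_id)
    if prompt:
        print("[buzz] " + prompt)  # stands in for haptic cue + voice feedback

on_chip_detected("chip-kitchen")
```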

 

The design team says they plan to produce the home medical devices for people with Alzheimer’s using recyclable materials, though they haven’t yet identified which ones. They’re collaborating with healthcare experts and caregivers to develop the gadgets, and they add that their purpose is to support memory recall gently: not by correcting the users, but by guiding them through moments of disorientation with warmth and respect. With this, Memoria is a series of home medical devices for people in the early stages of Alzheimer’s, helping them stay connected to their present, memories, routines, and even loved ones.

the display projects photos, videos, memory prompts, names, and time

the sculptural base station has a large, circular AMOLED screen

the wearable device interacts with these chips, giving the user real-time haptic cues and voice feedback

so far, Futurewave’s Memoria is still a concept project


detailed view of the sculptural base station

 

project info:

 

name: Memoria

design: Futurewave | @futurewave_design

google’s veo 3 generates AI videos from text with dialogues, voice-overs and sound effects
https://www.designboom.com/technology/google-veo-3-generates-ai-videos-text-dialogues-voice-overs-sound-effects-05-22-2025/ | Thu, 22 May 2025
the company says that with the new version, users can produce videos from text prompts with ‘improved quality’ as well as speech and audio.

Text to AI videos using Google’s veo 3

 

It’s not quiet anymore in the text-to-video world, since Google’s Veo 3 can produce AI-generated videos with audio, dialogue, voice-overs, and sound effects. The model comes from Google’s very own DeepMind, and the company says that with Veo 3, users can produce videos from text prompts with ‘improved quality’ as well as speech and audio. It’s a step ahead in the AI video race, since OpenAI’s Sora has yet to add a sound feature to its software.

 

Aside from dialogue and sound effects, users can also apply ambient noise and background music to their AI videos with Google’s Veo 3. The company says its model follows text prompts and series of actions with ‘greater accuracy.’ For example, the prompt ‘a paper boat sets sail in a rain-filled gutter’ generates exactly that clip in a close-up shot: the water flows, and the boat even tilts as it drifts down the gutter. Unlike the previous versions, Veo 3 has a distinct sharpness that makes its output look less like an AI video. In some ways, that’s quite alarming.
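Developers reach Veo through Google’s Gemini API. The sketch below assumes the google-genai Python SDK’s asynchronous video-generation flow; the model ID and config fields are assumptions and may differ from what actually ships.

```python
import time

from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

# model ID is an assumption based on Google's Veo naming scheme
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",
    prompt="a paper boat sets sail in a rain-filled gutter",
    config=types.GenerateVideosConfig(number_of_videos=1),
)

while not operation.done:  # generation is asynchronous, so poll the operation
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("paper_boat.mp4")
```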

all images courtesy of Google Deepmind

 

 

The company also releases AI-powered filmmaking app

 

With the boom of AI videos and tools like Google’s Veo 3, some might take advantage of them to spread disinformation, the intentional misleading of the public. At present, spotting whether a video is AI-generated is still fairly easy. Usually, lip movements and voice-overs lag slightly behind the actors’ facial expressions, and there’s an uncanny feeling to the way subjects move and blink: slower and smoother than in movies and shows shot on cameras.

 

In terms of video quality, AI videos, even some produced by the company’s Veo 3, have a blurry background that makes the subjects stand out more, as if the software were using a ‘portrait’ mode similar to smartphone cameras. Google has also released Flow, an AI-powered filmmaking app tailored to Veo 3. Here, creatives can control the camera motion, angles, and perspectives of the AI videos; they can also extend existing shots and add transitions from one angle to another. At the moment, only those with a Google AI Ultra subscription, around 125 USD per month, can fully access all of Google’s tools, including Veo 3.

the new version can also expand the video size and match the missing parts

with a descriptive prompt, the recent model can produce the exact request

from the previous version, the new model can still embed characters in a defined setting

 

a paper boat sets sail in a rain-filled gutter

users can also control the camera movement


so far, users can fully access the tools via AI Ultra subscription

 

project info:

 

name: Veo 3

company: Google | @google

jony ive works with sam altman to develop openAI’s new tools and design products
https://www.designboom.com/technology/jony-ive-sam-altman-openai-tools-design-products-io-05-22-2025/ | Thu, 22 May 2025
in an interview, the duo says they’ve already been working on a device, which they describe as ‘the coolest piece of technology that the world will have ever seen.’

Io by jony ive merges with sam altman’s openAI

 

Jony Ive, who co-founded the studio LoveFrom together with Marc Newson, announces that he is creating AI tools and design products under ‘io’ for Sam Altman and OpenAI. The former Apple Chief Design Officer and the AI company’s founder discuss the upcoming products in an interview. Their collaboration dates back to 2024, when Jony Ive co-founded io, an engineering and product development company, with his former Apple designers Scott Cannon, Evans Hankey, and Tang Tan. OpenAI later bought io for around 6.4 billion USD, dubbed its biggest acquisition yet. In the interview released by the AI company on May 21st, 2025, Jony Ive and Sam Altman talk about their plans for OpenAI’s io. First off, io is now its own department at OpenAI with its own team of engineers and developers.

 

‘io is merging with OpenAI, formed with the mission of figuring out how to create a family of devices that would let people use AI to create all sorts of wonderful things,’ the duo says. They add that they’ve already been working on a device. While it’s not yet clear what it is, Sam Altman shares that Jony Ive has already given him one of the prototypes to take home. ‘I’ve been able to live with it, and I think it is the coolest piece of technology that the world will have ever seen,’ the OpenAI founder says. So far, their description mentions a magic intelligence in the cloud and possibly testing the ‘limit of what the current tool of a laptop can do’ in terms of how the device operates. io, by Jony Ive and Sam Altman under OpenAI, plans to release the first of their collaborative devices in 2026.

Jony Ive’s LP12-50 for Linn | image courtesy of Linn; read more here

 

 

new technology that ‘can make us our better selves’

 

As the interview between Jony Ive and Sam Altman progresses, they talk more about the personal inspiration San Francisco, where they’re currently based, gives them. The OpenAI founder hints at wanting to ‘democratize smart tools,’ to which the io co-founder replies: ‘what I see you worrying about are other people, are about customers, about society, about culture. And to me, that tells me everything I want to know about someone.’ The duo then discuss creating a new generation of technology that ‘can make us our better selves.’

 

io by Jony Ive and Sam Altman focuses on developing products under OpenAI, working on research- and engineering-based devices in San Francisco. Before co-establishing io in 2024, Jony Ive co-founded the collective LoveFrom with his friend and fellow designer, Marc Newson. He joined Apple in 1992 and left the company over two decades later as its top design executive. During his tenure, he worked on many of the company’s classic designs, including the iPod, earlier MacBooks, iPhones, and iPads, as well as iOS 7.

Jony Ive has also worked for Airbnb | image courtesy of Airbnb; read more here

iPhone 3Gs next to iPhone 4s | image courtesy of Zach Vega, via Wikimedia Commons

iPod 5th Generation | image courtesy of Mikepanhu, via Wikimedia Commons

the duo says they’ve already been working on their first device’s prototype

Jony Ive is the former Apple Senior Vice President of Design


Jony Ive and Sam Altman discuss OpenAI’s io in an interview

 

project info:

 

name: io 

co-founder: Jony Ive

collective: LoveFrom

company: OpenAI | @openai

founder: Sam Altman

HWKN’s commercial masterplan in sharjah is the UAE’s first AI-planned district
https://www.designboom.com/architecture/hwkn-commercial-masterplan-sharjah-uae-first-ai-planned-district-11-05-21-2025/ | Wed, 21 May 2025
each of the buildings has been shaped by AI-generated prompts informed by HWKN’s research into sharjah’s climate, cultural identity, and urban morphology.

sharjah’s district 11 masterplan envisioned as a ‘work resort’

 

In central Sharjah, HWKN is set to develop the UAE’s first AI-planned district, featuring offices, cafés, childcare and healthcare facilities, and a mosque. District 11 responds to a specific urban gap in a city that is historically residential and institutionally rich, but lacking in integrated commercial hubs. HWKN’s approach proposes a walkable ‘work resort’ that reflects the local context while introducing new forms of professional and social interaction and integrated living.

 

Each of the masterplan’s eleven buildings has been shaped by AI-generated prompts informed by HWKN’s research into Sharjah’s climate, cultural identity, and urban morphology. These inputs guided the planning of massing, shading strategies, and spatial configurations, particularly in relation to heat mitigation and walkability. The goal is to expand the possibilities of automated design by using AI to compress research cycles and simulate environmental and programmatic outcomes before a single line is drawn.
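HWKN hasn’t detailed its toolchain, and ‘AI-generated prompts’ likely refers to generative design tools rather than a script, but the idea of simulating outcomes before drawing can be caricatured in a few lines: a random search over eleven building heights, scored by an invented shade-versus-walkability metric.

```python
import random

random.seed(11)  # District 11

def score(heights: list[int]) -> float:
    # invented metric: taller southern blocks shade the central boulevard
    # (welcome in Sharjah's heat), but abrupt height jumps hurt walkability
    shade = sum(heights[:5])  # assume the first five blocks form the sunny edge
    roughness = sum(abs(a - b) for a, b in zip(heights, heights[1:]))
    return shade - 0.5 * roughness

candidates = ([random.randint(3, 12) for _ in range(11)] for _ in range(10_000))
best = max(candidates, key=score)
print("best massing (storeys per building):", best)
```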

all images courtesy of HWKN

 

 

hwkn fuses cultural research with ai-generated prompts

 

District 11 aims to rethink the structure and social function of office neighborhoods in the Gulf, particularly by embedding AI into the conceptual framework of the masterplan itself. Matthias Hollwich, HWKN’s founding principal, known for his early research into aging and workplace design, describes the firm’s approach as a ‘reverse-engineering process,’ where AI is deployed to generate form based on desired social outcomes. Here, that means prioritizing collaboration, walkability, and thermal comfort, concepts that are rarely central to commercial development, especially in the Gulf.

 

This is also not HWKN’s first AI-influenced project. The firm introduced the Work Resort concept in London’s Canada Water Dockside development, bringing together commercial workplace and hospitality logics. In Sharjah, that model is taken further and scaled to the level of an urban district, with the firm using AI across the entire project lifecycle: from environmental simulations to spatial programming and long-term adaptability. The intention is to use AI as a tool for reverse-engineering environments that prioritize collaboration, climate responsiveness, and walkability, criteria that are often secondary in conventional commercial planning.

HWKN is set to develop the UAE’s first AI-planned masterplan in central Sharjah

 

 

eleven mixed-use buildings encircle a central boulevard

 

Commissioned by Al Marwan Real Estate Development Group, the masterplan is characterized by porous edges, shaded courtyards, and interconnected public spaces to integrate work, leisure, and wellness. Importantly, the project’s location between established residential zones and major cultural institutions, including the Sharjah Museum of Islamic Civilization and the Sharjah Art Museum, also positions it as a new connective tissue in the city’s fabric.

 

District 11’s buildings unfold around a canyon-like central boulevard that serves as the project’s spine, while side passages and courtyards weave in all of the various programs, reframing the conventional office district as a full-spectrum community aligned with the cultural values of the UAE. In particular, HWKN places emphasis on spatial variety and human comfort, integrating public areas shaded by architectural forms, and interiors designed to support flexibility, interaction, and well-being.

District 11 features offices, cafés, childcare and healthcare facilities, and a mosque


the masterplan is characterized by porous edges, shaded courtyards, and interconnected public spaces

each of the buildings has been shaped by AI-generated prompts informed by HWKN’s local research


the buildings unfold around a canyon-like central boulevard that serves as the project’s spine

 

 

project info:

 

name: District 11

architect: HWKN | @hwkn_architecture

location: Sharjah, UAE
