technology archives | designboom | architecture & design magazine https://www.designboom.com/technology/ designboom magazine | your first source for architecture, design & art news Fri, 13 Jun 2025 09:07:43 +0000 en-US hourly 1 infinite machine releases olto, an aluminum e-bike with magnetic foldable pedals https://www.designboom.com/technology/infinite-machine-olto-aluminum-e-bike-magnetic-foldable-pedals-06-13-2025/ Fri, 13 Jun 2025 09:50:00 +0000 https://www.designboom.com/?p=1138767 the scooter-like ride runs on an electric motor, but if the riders need an assist, they can swivel and start using the pedals.

The post infinite machine releases olto, an aluminum e-bike with magnetic foldable pedals appeared first on designboom | architecture & design magazine.

Infinite Machine’s Olto e-bike with aluminum body

 

Infinite Machine unveils Olto, an e-bike with an aluminum body, magnetic foldable pedals, and an automatic lock that deters theft. From afar, its frame makes it resemble a scooter, but riders can take it onto bike lanes without issue. It runs on an electric motor, and when riders want to pedal-assist, they can swivel the magnetic pedals out and start pedaling. The design accommodates two people while still carrying objects via the underseat handles, and hidden footpegs can be pulled out for the rear passenger.

 

Instead of the conventional softly rounded seat, the one on Infinite Machine's Olto has a squared shape that allows for a wider sitting area. Under the seat, a boxy stem hides the swappable battery: to change it, riders pull up the seat, slide out the depleted battery, and insert a freshly charged one. At the front, the headlamps sit on the lower frame, just above the tire, and flash both high and low beams. The Olto is modular, too: riders can attach optional accessories such as a child carrier, a rear rack, a basket, and a center panel around the two-wheeler.

infinite machine olto e-bike
all images courtesy of Infinite Machine

 

 

Automatic steering and wheel lock to avoid thefts

 

A minimal design marks the modest style of Infinite Machine's Olto e-bike. Unlike conventional e-bikes, and even scooters, it carries only two hues, one of them the bare aluminum itself, and there are no graphics splashed across the frame, which keeps the look clean. The two-wheeler is also high-tech and, as the company says, theft-proof. Riders can connect the Olto to the internet and track it via GPS through its app, and a dedicated AirTag slot lets users pair it with Apple's Find My network. When parked, the ride locks its steering and wheels on its own.

 

This makes it difficult for non-owners to move or steal the e-bike. If someone tries, it sounds an alarm and the owner receives an instant notification on their smartphone. Riders can also use the app to monitor mileage, battery percentage, and tampering alerts, as well as to lock and unlock the vehicle. Specs-wise, the Olto offers 40 miles of range, a top speed of 20 miles per hour on bike lanes, and up to 33 miles per hour off-road. It comes with a dual-suspension frame and a 750-watt rear hub motor, and the battery charges to 50 percent in an hour. Deliveries begin in Fall 2025.

Infinite Machine unveils Olto, an e-bike with an aluminum body

the two-wheeler comes with an automatic lock that prevents outsiders from stealing or moving it

the front headlamps are just above the wheel

if the riders need an assist, they can swivel and start using the pedals

Instead of the conventional softly rounded seat, the one on the e-bike has a square shape


the delivery of the vehicle starts in Fall 2025

 

project info:

 

name: Olto

company: Infinite Machine | @infinitemachine

poetry camera writes and prints poems about the people and objects it captures https://www.designboom.com/technology/poetry-camera-writes-prints-poems-captures-ai-claude-06-12-2025/ Thu, 12 Jun 2025 09:50:55 +0000 https://www.designboom.com/?p=1138553 reminiscent of an instant camera, there’s a slit below the lens where receipt-like paper prints out the poems.

The post poetry camera writes and prints poems about the people and objects it captures appeared first on designboom | architecture & design magazine.

Poetry Camera writes poems on its own using AI

 

Poetry Camera writes and prints poems about the people, objects, and surroundings it photographs using AI. Reminiscent of an instant camera, it has a large, protruding lens on its boxy frame that scans the subject, and below it a slit where receipt-like paper prints out the poems the device digitally pens. The design looks comical with its sizeable shutter button and viewfinder, but there's also something nostalgic about it, since it hands the user stanzas on a piece of paper rather than projecting them on a screen.

 

The Poetry Camera runs on Anthropic's Claude 4 language model, which is why the device can write poems in literary language almost instantly. Using the built-in knob, the user can choose the type of AI-generated poem they want, from haiku, sonnet, and limerick to alliteration and free verse. So far, neither the images nor the poems are stored digitally on the Poetry Camera, meaning the only copy the user has is the printed receipt.

poetry camera poems AI
all images courtesy of Poetry Camera | photo by Kaylee Pugliese/RISD

 

 

Device needs wifi connection to work

 

Kelin Zhang and Ryan Mather, the masterminds behind the Poetry Camera, assemble the device by hand in their 'microfactory' in New York. The team stays small because they want to piece the parts together individually and add a personal touch to each unit. The frame is made of vacuum-cast plastic housings, and the device runs on a Raspberry Pi Zero 2 W with a Raspberry Pi Camera Module 3. There's a catch, however: the device can't work without a WiFi connection, which it needs to reach the language model and start churning out poems.

 

The team says the Poetry Camera doesn't train the AI model it uses; of Anthropic's Claude 4, they add that they 'care to pick reputable AI model providers that do not train on your data.' Since Kelin Zhang and Ryan Mather have made the device open-source, crafty users can build it on their own. Those who aren't so handy can simply order the Poetry Camera, start taking pictures, and wait for the device to write and print poems on receipt-like paper.
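Since the design is open-source, the shutter-to-receipt flow described above can be pictured in a few lines. The Python sketch below is purely illustrative: the knob-to-poem mapping, the prompt wording, and the `capture`, `compose`, and `print_receipt` helpers are assumptions standing in for the camera, the Claude API call, and the thermal printer, not the team's actual code.

```python
# Hypothetical sketch of the Poetry Camera's capture-to-print loop.
# Helper names and prompt wording are illustrative assumptions.

POEM_TYPES = ["haiku", "sonnet", "limerick", "alliteration", "free verse"]

def poem_type_from_knob(position: int) -> str:
    """Map the physical knob position to one of the supported poem forms."""
    return POEM_TYPES[position % len(POEM_TYPES)]

def build_prompt(poem_type: str) -> str:
    """Compose the instruction sent along with the photo to the model."""
    return (
        f"Write a {poem_type} about the people, objects, and surroundings "
        "in this photo. Reply with the poem only."
    )

def shutter_pressed(knob_position: int, capture, compose, print_receipt):
    """One shutter press: photograph, ask the model for a poem, print it.

    `capture`, `compose`, and `print_receipt` stand in for the Camera
    Module 3, the Claude API call (which needs WiFi), and the receipt
    printer. Nothing is stored on the device, so the printout is the
    only copy of both image and poem.
    """
    image = capture()
    prompt = build_prompt(poem_type_from_knob(knob_position))
    poem = compose(image, prompt)
    print_receipt(poem)
    return poem
```

In this sketch the network call is isolated behind `compose`, which mirrors the article's note that the device is inert without WiFi: everything else runs locally on the Pi, but the poem itself only exists once the model responds.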

there’s a slit below the lens where the receipt-like paper prints out the poems

sample AI-generated poem by the device

view of the redesigned camera with a knob that lets users choose the type of poem they want

view of the poem printed on a receipt-like paper | photo by Sam McAllister / Anthropic

first test assembly with the new main board


the team assembles the parts by hand in New York

 

project info:

 

name: Poetry Camera | @poetry.camera

design: Kelin Zhang, Ryan Mather

research: Anthropic | @anthropicai

language model: Claude 4

files: here

aston martin brings out valkyrie LM, a hypercar with a built-in screen on the steering wheel https://www.designboom.com/technology/aston-martin-valkyrie-lm-screen-steering-wheel-racing-hypercar-06-11-2025/ Wed, 11 Jun 2025 20:00:07 +0000 https://www.designboom.com/?p=1138340 for this model, the design team has refreshed the engine so the hypercar can run on normal fuel instead of the ones used in racing.

The post aston martin brings out valkyrie LM, a hypercar with a built-in screen on the steering wheel appeared first on designboom | architecture & design magazine.

Aston Martin Valkyrie LM runs like a hypercar used in racing

 

Aston Martin releases the Valkyrie LM, a hypercar derived from the road-legal Valkyrie, a project first shown as a non-working model in July 2016. The LM variant is for racing fans who want to own the vehicle and drive it on tracks. The current model is neither road-legal nor eligible for official racing, since the car manufacturer has designed it for private use only. Because of this, the Valkyrie LM carries neither the ballast, the extra weight used to balance a car during races, nor the specific electronics installed to comply with racing rules.

 

The hypercar's controls are designed for personal driving, too. Race cars usually carry a system that adjusts the power delivered to the wheels; in the Valkyrie LM it is absent, making the setup more 'direct,' closer to how road-legal cars work with their knobs and controls. The design team has also refreshed the engine so the hypercar runs on normal fuel instead of racing fuel. To keep the spirit of racing, the team retains several features: a seven-speed gearbox with shift paddles behind the steering wheel, just like in track cars, and an adjustable suspension that adapts to different track conditions.

aston martin valkyrie LM
all images courtesy of Aston Martin

 

 

Built-in screen on the steering wheel

 

The tires of the Aston Martin Valkyrie LM still come from Pirelli, the same company that supplies Formula 1, and the carbon-fiber seats feature padding around the shoulders and head, upholstered like those of racing cars. The steering wheel comes with lighting and a built-in screen that shows drivers their driving information as if they were in a race. For a brief history recap: the car manufacturer started its Valkyrie project in 2016, producing the road-legal Valkyrie Coupe and the convertible Valkyrie Spider over time. Now comes the Valkyrie LM, which draws its design and engine cues from both sibling models.

 

The race car version of the Valkyrie is competing in two major series in 2025: the FIA World Endurance Championship (WEC) and the IMSA WeatherTech SportsCar Championship in North America. Just like the Valkyrie racing at the 2025 Le Mans, the LM uses a modified version of the same V12 engine; it forgoes a turbocharger yet produces up to 520 kilowatts, in line with racing rule limits. Aston Martin says it will produce only ten Valkyrie LM units, to be delivered in the second quarter of 2026.

Aston Martin releases the Valkyrie LM, a hypercar inspired by the road-legal version of the Valkyrie

the LM variant is for the racing fans who want to own the vehicle and drive it on tracks

this model is not road-legal and can’t also be used for official racing

the Aston Martin Valkyrie LM doesn’t have the ballast, or the extra weight used to balance the car during races

rear view of the hypercar


the hypercar can run on normal fuel instead of the ones used in racing

the steering wheel comes with lighting and a built-in screen

the setup is more 'direct,' similar to how road-legal cars work, with their knobs and controls

the tires of the hypercar are still from Pirelli


so far, there are only ten units available for this model

 

project info:

 

name: Valkyrie LM

car manufacturer: Aston Martin | @astonmartin

snap to release lightweight AR glasses that double as a wearable computer https://www.designboom.com/technology/snapchat-ar-glasses-lenses-visuals-sounds-book-reading-spectacles-05-29-2025/ Wed, 11 Jun 2025 09:30:43 +0000 https://www.designboom.com/?p=1135682 unveiled at the augmented world expo 2025, the immersive 'specs' is slated for a 2026 release.

The post snap to release lightweight AR glasses that double as a wearable computer appeared first on designboom | architecture & design magazine.

Snap AR glasses with see-through lenses

 

Snap Inc. has announced new lightweight AR glasses with see-through lenses that double as a wearable computer. Unveiled at the Augmented World Expo 2025, the immersive Specs are slated for a 2026 release with a slew of features arriving in the upcoming Snap OS update. Through the see-through lenses, users can turn 2D information into 3D objects floating before their eyes using integrated language models, including those from OpenAI and Google's Gemini. There's also real-time transcription for around 40 languages, which understands even non-native accents with 'high accuracy.' Developers can generate 3D objects while wearing the glasses and remotely monitor and manage multiple pairs of Specs.

 

In line with the recent announcement, the Snap Spectacles, the company's existing pair of AR glasses, also have lenses that generate the images and sounds of the book the user is reading. Named Augmented Reading Lenses, the project is a collaboration between the National Library Board of Singapore and Snap Inc., with LePub Singapore as the campaign's production lead. The lenses use real-time OCR, the conversion of printed text into a digital format, and generative AI to produce the visuals. The device already has stereo speakers, so soundscapes are a natural addition to the reading experience.

snapchat AR glasses lenses
all images courtesy of Snap Inc. as well as National Library Board of Singapore and LePub Singapore

 

 

Sounds play as the user reads the text

 

The Snap AR glasses use text recognition and machine learning to see what the user is reading and trigger the related visuals and sounds. First, the device scans the printed text as the user reads; then images float before their eyes, accompanied by sound effects linked to specific words or scenes. When the book describes an environmental or action sound, like doors opening, the glasses play that audio through the speakers.

 

So far, the company and the library say the visuals appear in sync with what the user is reading: once they look up from the page, they see the images depicted in the text in their field of vision. The National Library Board of Singapore adds that the project is part of its initiative to use technology to encourage more people to read books. The teams collaborated with LeGarage, the innovation branch of LePub Singapore, to help develop the reading experience and campaign. They plan to roll out beta-testing devices later in 2025 in Singapore to gather feedback before the public rollout.

 

The story was updated on June 11th, 2025, to include the announcement on the 2026 Specs AR glasses.

Snap Inc. has announced the release of its new lightweight AR glasses that double as a wearable computer

the device has see-through lenses for sharper and clearer viewing

the device uses real-time OCR and generative AI to produce the visuals and sounds

 

users can also interact with the floating imagery, based on what they’re reading

sample visuals when the user reads Pride and Prejudice


even Frankenstein shows up as a generated visual

the device already has stereo speakers, so the soundscapes are present

the images and sounds appear as the user reads

users can see the images depicted in the text in their field of vision


the beta testing rolls out later in 2025

 

project info:

 

name: Augmented Reading Lenses

companies: Snap Inc., Snap AR Studio, LePub Singapore | @spectacles, @lepub_worldwide

library: National Library Board of Singapore | @nlbsingapore

 

designboom has received this project from our DIY submissions feature, where we welcome our readers to submit their own work for publication. see more project submissions from our readers here.

 

edited by: matthew burgos | designboom

1980s škoda favorit returns as electric car with ‘smiling’ face and animated headlights https://www.designboom.com/technology/1980s-skoda-favorit-electric-car-concept-06-10-2025/ Tue, 10 Jun 2025 00:30:59 +0000 https://www.designboom.com/?p=1137822 as part of the 'icons get a makeover' series, the concept model is one of the company's vehicles that goes through a modern refresh.

The post 1980s škoda favorit returns as electric car with ‘smiling’ face and animated headlights appeared first on designboom | architecture & design magazine.

Škoda Favorit comes back as a concept electric car

 

The Škoda Favorit from the 1980s makes a comeback as a concept electric car with a smiling front and headlights that project different patterns. As part of the Icons Get a Makeover series, the concept is one of several Škoda vehicles receiving a refresh that honors their history while modernizing everything else. For a brief history: the Škoda Favorit hatchback, designed by Bertone, came out in 1987. It was built for the streets and roads of the Eastern Bloc, including around the company's headquarters in Mladá Boleslav, Czech Republic.

 

Back then, it had a 1.3-liter engine producing between 40 and 50 kW, and it remained in production until 1994, when the Felicia replaced it. Fast forward to 2025: Škoda taps designers Ljudmil Slavov and David Stingl to revive the Favorit as a modern electric concept. The design is a crossover between an SUV and a hatchback, with a taller body that leaves more space for the battery in the floor. Slavov says he found it challenging to simplify Bertone's already minimalist shapes; instead of changing them, he evolved and expanded them, making the overall frame and look modern rather than retro.

škoda favorit electric car
all images courtesy of Škoda, Ljudmil Slavov, David Stingl

 

 

Animated headlights with foldable covers

 

The face of the Škoda Favorit electric car steers away a bit from the typical front ends the company manufactures. That is intentional, says designer Ljudmil Slavov, who uses it to bring Škoda's Modern Solid style to the original 1980s model designed by Bertone. He collaborated with his colleague David Stingl on the 3D sketches; Stingl's task was to give the car 'volumetric proportions, shapes, and design elements so the result looks almost like a finished product with a Modern Solid expression,' he says.

 

The result includes a shared door handle, split in the middle so the doors open in different directions, and simpler wheels with an embedded look and four fan-like sections. Then, the headlights: they reference the original model, but this time the LED lights at the front and rear can project different patterns, customized by the owner. They're framed by partially translucent covers that could fold away, too. Because of these headlights, the face of the concept seems to smile. For now, the revived Škoda Favorit remains a concept; alongside it, the design team has conceived a racing concept inspired by the Favorit's rally heritage, with prominent bumpers made of soft-touch material.

left: Modern Solid Favorit by Bertone; right: modern Favorit by Ljudmil Slavov and David Stingl

as part of the Icons Get a Makeover, the concept model is one of the Škoda vehicles that goes through a refresh

the design is a crossover between an SUV and a hatchback

there’s a shared handle for the doors, split in the middle so they open in different directions

Ljudmil Slavov (left) and David Stingl (right), and their Škoda Favorit design behind them

the 1987 original model of the vehicle, designed by Bertone


the Škoda Favorit hatchback came out in 1987

 

project info:

 

name: Favorit

company: Škoda | @skodagram

designers: Ljudmil Slavov, David Stingl | @lsd__esign, @davidstingl

 

designboom has received this project from our DIY submissions feature, where we welcome our readers to submit their own work for publication. see more project submissions from our readers here.

 

edited by: matthew burgos | designboom

frank lloyd wright x airstream trailer brings usonian design on the road https://www.designboom.com/technology/frank-lloyd-wright-airstream-trailer-usonian-design-road-06-09-2025/ Mon, 09 Jun 2025 10:50:09 +0000 https://www.designboom.com/?p=1137853 Airstream and Frank Lloyd Wright Launch Limited Travel Trailer   Airstream partners with the Frank Lloyd Wright Foundation to release the Usonian Limited Edition Travel Trailer, a 28-foot mobile living space inspired by Wright’s architectural principles. Only 200 units are scheduled to be produced until 2027, each one merging Airstream’s traditional riveted aluminum shell with […]

The post frank lloyd wright x airstream trailer brings usonian design on the road appeared first on designboom | architecture & design magazine.

Airstream and Frank Lloyd Wright Launch Limited Travel Trailer

 

Airstream partners with the Frank Lloyd Wright Foundation to release the Usonian Limited Edition Travel Trailer, a 28-foot mobile living space inspired by Wright’s architectural principles. Only 200 units are scheduled to be produced until 2027, each one merging Airstream’s traditional riveted aluminum shell with design elements drawn from Wright’s Usonian vision—compact, efficient homes built with a strong connection to nature. Developed collaboratively between Airstream’s design team in Jackson Center, Ohio, and the Frank Lloyd Wright Foundation at Taliesin West in Arizona, the trailer combines mid-century modern aesthetics with adaptable interiors suited for travel, camping and everyday use.


all images courtesy of Airstream

 

 

Usonian Principles Applied to Mobile Living

 

Wright coined the term Usonian to describe a distinctly American architectural approach: modest, well-crafted homes with open layouts, built-in furniture, and a strong relationship to the surrounding environment. This design philosophy shaped the interior layout of the Usonian Limited Edition trailer, a collaboration between the Ohio-based team at Airstream and the Arizona foundation dedicated to Frank Lloyd Wright.

 

The result is a highly adaptable living space that reflects Wright’s principles and Airstream’s emphasis on efficient, mobile design. A rear sleeping area features twin beds that convert into a king-sized bed at the push of a button, with custom slipcovers and bolsters that transform the room into a daytime lounge. At the front of the trailer, a modular lounge serves as a dining area, workspace, or secondary sleeping area. Chairs and a stool collapse and stow within custom cabinetry, maximizing use of space without sacrificing comfort.


Airstream’s traditional aluminum shell meets design elements drawn from Wright’s Usonian vision

 

 

Design Details: Materials, Light, and Layout

 

The trailer interior emphasizes natural light and open views. A total of 29 windows, including two skylights and circular portholes, offer more glass surface than any other Airstream model to date. Overhead storage was reduced to make room for windows at sitting and standing height, reinforcing the indoor-outdoor connection central to Wright’s designs.

 

The color palette is based on Wright’s 1955 Martin-Senour Paint Collection, featuring earthy reds, mustard yellows, ochres, and turquoise tones drawn from the American desert. Floating shelves, open floor plans, and a custom slatted ceiling light fixture—inspired by a Taliesin West design—contribute to the trailer’s visual continuity and mid-century atmosphere.


only 200 units are scheduled to be produced until 2027

 

 

Historic Patterns and Limited Edition Features

 

Wright’s influence appears not only in the layout but also in the detailing. The Gordon Leaf Pattern, a geometric motif originally created by Wright associate Eugene Masselink, is used throughout the trailer—in lighting, cabinet panels, and doors. Each trailer is numbered and features special edition badging, including Wright’s signature Taliesin Red tile on the exterior. Interior furnishings also reflect Wright’s style, including deep, high-back lounge cushions and shelves designed for displaying books or travel keepsakes. USB charging ports are discreetly built into shelving for convenience.

 

The trailer has a GVWR of 7,600 lbs, making it towable by many full-size SUVs and trucks. It is available through Airstream dealers nationwide, with pricing starting at $184,900.


Wright’s influence appears not only in the layout but also in the detailing


the unit features a total of 29 windows, including two skylights and circular portholes


the highly adaptable living space reflects Wright's principles and Airstream's emphasis on mobile design


the trailer interior emphasizes natural light and open views


overhead storage was reduced to make room for windows


the color palette includes earthy reds, mustard yellows, ochres, and turquoise tones


interior furnishings also reflect Wright’s style

 

project info: 

 

name: Usonian Limited Edition Travel Trailer
company: Airstream | @airstream_inc
in collaboration with: Frank Lloyd Wright Foundation | @wrighttaliesin

sónar+D discusses quantum science in art, music by AI & future of creatives in series of talks https://www.designboom.com/technology/sonar-d-discusses-ai-music-art-geopolitics-series-talks-06-09-2025/ Mon, 09 Jun 2025 10:30:10 +0000 https://www.designboom.com/?p=1137423 part of the talk and forum programs happen on the mornings of june 12th and 13th, before the sónar 2025 festival opens to the public at 3pm.

The post sónar+D discusses quantum science in art, music by AI & future of creatives in series of talks appeared first on designboom | architecture & design magazine.

sónar+D 2025 talks about art, music and creative industries

 

Sónar+D addresses the use of quantum science in art, making music with AI, experimental video games in performances, and what the future looks like in the creative industries in a series of talks at Sónar 2025. The event runs from June 12th to 14th, 2025, at Fira Montjuïc in Barcelona, Spain, as part of the annual electronic music and digital art festival. On these days, over 100 lectures, exhibitions, workshops, and performances take place at once. designboom also hosts discussions during the festival, interviewing artists Yolanda Uriz, Dmitry Morozov aka ::vtol::, and George Moraitis on their practice and the making of their modern art, sound performances, and stage designs.

 

Yolanda Uriz uses physical phenomena, vibration, electromagnetic waves, and chemical molecules to decode sound, light, and smell in her installations and performances. Sound is also central to Dmitry Morozov aka ::vtol::'s robotics and installations, which emphasize the link between emergent systems and new kinds of technological synthesis. George Moraitis likewise works with sound, narrating memory, experience, and a sense of history through sound art, audiovisual installations, two-dimensional works, and performance. designboom's talks take place on June 12th from 5:30pm. Sónar+D was established in 2013 as a platform for creatives to examine the ways technology, and now AI, influence art, music, and society. This edition's conferences focus on three thematic areas: AI + Creativity, Futuring the Creative Industries, and Worlds to Come.

 

Meet us at Sónar+D – tickets here!

AI music art sónar+D
images courtesy of Sónar, unless stated otherwise | photo by Cecilia Diaz Betz

 

 

AI + Creativity explores the politics of the new technology

 

The AI + Creativity section of Sónar+D (tickets are available here) explores how creatives can use AI in production, music, and audiovisual design. The talks also dive into the ethical and political aspects of artificial intelligence, complementing the interviews in other sections, including designboom's conversations with multi-sensory artist Yolanda Uriz, transdisciplinary artist Dmitry Morozov aka ::vtol::, and multimedia artist George Moraitis. The discussion starts with Introducing AI & Music powered by S+T+ARTS, a forum on AI and sonic creativity. In another room, Libby Heaney performs Eat my Multiverse, which uses quantum computing for visuals, sounds, and music development. Jordi Pons' Artistic Trends, Music & AI discusses new musical genres and sonic structures emerging from AI algorithms, while Rebecca Fiebrink hosts Design your dream music AI tool, a session on AI tool design accessible to users without programming knowledge.

 

Joanne Armitage's Automating Bodies: Power, Music and AI explores the power dynamics at play when users adopt AI for creative production, including gender bias in algorithmic music. Marije Baalman's A Musical Understanding of AI as Resonance treats machine learning systems as 'resonant entities,' and there is a masterclass on real-time audio machine learning for culturally specific sound with Lamtharn (Hanoi) Hantrakul, known as ญาบอยฮานอย (yaboihanoi). Visitors can participate in AI Performance Playground, an AI & Music Hacklab that lets them use AI as an actual instrument. In this section, +RAIN Film Festival also screens films produced with AI models, and AudioStellar's Territorios sonoros emergentes demonstrates how motion tracking and AI can power dance-driven visual and sonic performances. At the same time, Maria Arna premieres Ama, a live musical performance combining AI with the human voice.

AI music art sónar+D
Sónar+D addresses the impact of AI in music, art and more through a series of talks during Sónar 2025

 

 

Discussions on the present and future of the creative industries

 

Inside the Futuring the Creative Industries section, conversations spotlight the changes and opportunities that new technologies, including but not limited to AI, bring to music and art, cultural management, communication, advertising, experience design, and trend research. The ‘How to Future the Creative Industries’ forum features experts from institutions like the New Museum, HERVISIONS, Onassis Foundation, Serpentine Gallery, NewArt Foundation, LAS Foundation, Kapelica, gnration, and Tabakalera.

 

The session explores the role of cultural institutions in sharing new ideas and trends within a media-saturated environment. Trend analysts Berta Segura and Francesca Tur host ‘Hacking the World,’ which analyzes how marketing, geopolitics, technology, and digital culture transform creator profiles, audience formation, and artist-public interaction. The intersection of cultural heritage and digital technology is explored through ‘Lux Mundi,’ an audiovisual experience reinterpreting Romanesque fresco paintings. Artists Alba G. Corral, Massó, Desilence, and Hamill Industries collaborate with Tarta Relena for this Generalitat of Catalonia initiative.

the event runs from June 12th to 14th, 2025, at Fira Montjuïc in Barcelona | photo by Nerea Coll

 

 

Still inside the Futuring the Creative Industries section, creative collaboration and technology integration are also central. TIMES, a European network, presents ‘The Crossing’ with contributions from Margarida Mendes, Chris Watson & Izabella Dłużyk, and Saint Abdullah, Eomac & Rebecca Salvadori. Arts Korea Lab hosts ‘Future Thinking,’ where Korean creators like WOMAN OPEN TECH LAB, Earth-topia, Seungsoon Park, Hwia Kim, and Tae Eun Kim present their projects. AlphaTheta showcases its euphonia rotary mixer and virtual reality DJ suite. Music2.0 and JSPA explore the history of Japanese synthesizers.

 

MusicTech Europe, in collaboration with Barcelona Music Tech Hub, features the Music Tech Europe Academy startup presentations and ‘MusicTech Dialogues’ on data use in the creative economy. The event also includes interviews with artists and participants. Designboom interviews Yolanda Uriz, Dmitry Morozov aka ::vtol::, and George Moraitis in Lounge+D, and Time Out London also hosts live interviews. W1 Curates presents art and music collaborations on the screens of Stage+D, featuring artists such as Max Cooper and Goldie.

Actress & Suzanne Ciani present ‘Concrète Waves’ during Sónar by Day at Stage Complex+D

 

 

The last section is Worlds to Come, a thematic area that explores speculative futures and human-technology interfaces, examining the relationship of today’s technology with culture and society. Libby Heaney’s ‘Eat My Multiverse’ brings quantum computing and non-binary perspectives into an artistic context, with a focus on re-evaluating current global conditions. Space exploration is a recurring theme: Xin Liu’s ‘Cosmic Metabolism’ discusses the scientific and poetic elements of her work, including her personal genome exhibit, ‘A Book Of Mine’. The program also investigates human interaction with technology and the environment. Albert.DATA’s ‘SYNAPTICON’ performance demonstrates real-time brain activity using brain-computer interfaces.

 

Danielle Braithwaite-Shirley’s ‘WE CAN’T PRETEND ANYMORE’ offers an interactive digital narrative exploring the history of Black trans individuals. Tega Brain’s ‘Questions of Automation’ addresses digital sustainability through creative coding and DIY strategies, highlighting political and environmental concerns. Discussions extend to social innovation and community building. ‘Portals: Talks of Worlds to Come’, presented by The Social Hub, features a panel of experts discussing design, sustainability, and cultural innovation in shared spaces. The program also includes performances, such as Luis Garbán (Cardopusher) with ‘DESTRUCCIÓN’, an audiovisual project combining reggaeton, industrial, and breakcore. Each of these talks and forums contributes to the overall purpose of Sónar+D, which is to create a space for knowledge exchange between different professional fields. These programs coincide with the Sónar 2025 festival, which runs between June 12th and 14th.

Stage+D by MEDIAPRO, Playmodes, UPC-Telecos present Astres | photo by Nerea Coll

Lux Mundi installation by Alba G. Corral, Massó, Desilence & Hamill Industries with Tarta Relena at Sónar+D


Sónar+D shows a replica of the apse of Sant Climent de Taüll to host Lux Mundi

Yolanda Uriz's Chemical Calls of Care
Yolanda Uriz’s Chemical Calls of Care | image courtesy of Yolanda Uriz

Chemical Calls of Care (2024), an interactive installation on audio-olfactory communication | image courtesy of Yolanda Uriz


Edge is a kinetic, sound and light object | image courtesy of ::vtol::

iPot is a device for performing a digital tea ceremony | image courtesy of ::vtol::

Schematic by George Moraitis | image courtesy of George Moraitis


Xe by George Moraitis | image courtesy of George Moraitis

 

project info:

 

event: Sónar 2025 | @sonarfestival

program: Sónar+D

location: Palau de Congressos de Fira Montjuïc, Barcelona, Spain

dates: June 12th to 14th, 2025

photography: Cecilia Diaz Betz, Nerea Coll | @ceciliadiazbetz, @nereacoll

entry: tickets here

The post sónar+D discusses quantum science in art, music by AI & future of creatives in series of talks appeared first on designboom | architecture & design magazine.

]]>
virtual football game fooscade comes with mini boots for your fingers https://www.designboom.com/technology/fooscade-players-fingers-mini-boots-play-virtual-football-studio-hong-hua-06-08-2025/ Sun, 08 Jun 2025 01:01:30 +0000 https://www.designboom.com/?p=1137715 inspired by arcade games, the device is a modern spin on the Pong game and tabletop foosball play.

The post virtual football game fooscade comes with mini boots for your fingers appeared first on designboom | architecture & design magazine.

]]>
Virtual football with mini boots as controllers

 

Hong Hua and Yixuan Liu create Fooscade, a virtual football game in which players slip their fingers into a pair of sliding mini boots before playing. Inspired by arcade games, the device is a modern spin on the Pong game and tabletop foosball play. The main catch here is the custom-made controllers. Unlike traditional table football, where players twist rods so the figures can kick the ball, the virtual football only functions when they slide the mini boots with their fingers. The field is also absent, replaced by a screen and a video game.

 

The game starts as soon as the players have their forefingers in the mini boots. The shoes have wires underneath, connected to the screen in front of the players, and through these the tiny footwear controls the digital football cleats. Just like in table football, players maneuver the ball and use rotating and sliding motions to beat their opponents. The aim is to score more goals than the other team in two and a half minutes. While the football is virtual, the mini boots make the concept tactile, keeping the spirit of the Pong game and traditional foosball play alive.

virtual football mini boots
all images courtesy of Hong Hua and Yixuan Liu

 

 

Fooscade revives soccer styles from early 2000s

 

Fooscade’s design is a homage to soccer styles from the 1990s to the early 2000s. New, however, is the series of geometric patterns the game projects, alongside bright colors like red and purple. Even the style chosen by the team is reminiscent of the jerseys of those eras. Because the football is virtual, it’s easier for Hong Hua and Yixuan Liu to add mechanics, animations, and visual cues compared to the traditional, manual version.

 

The interface is also pared back and easy to understand, so all players quickly know what to do. The technology behind Fooscade is based on a direct, wired interaction model, meaning there’s a near-instant response between a player’s movement and what they see on the screen. Fooscade handles its own game state tracking: a built-in timer runs for two and a half minutes per play, so the players don’t need to time their games. The design of the custom-made controllers is also open, so viewers can see the wiring and mechanics inside as the players move them. So far, the team has brought the virtual football game with mini boots to a few conferences.
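As a rough illustration of that kind of self-contained game-state tracking, a match loop with a fixed 150-second countdown might look like the sketch below. This is purely hypothetical: the class, team names, and structure are assumptions for the example, not the team’s actual code.

```python
import time

MATCH_LENGTH = 150  # two and a half minutes, as in Fooscade


class Match:
    """Toy game-state tracker: scores plus a countdown, nothing external."""

    def __init__(self) -> None:
        self.score = {"red": 0, "purple": 0}
        self.start = time.monotonic()

    def time_left(self) -> float:
        """Seconds remaining before the match ends."""
        return max(0.0, MATCH_LENGTH - (time.monotonic() - self.start))

    def goal(self, team: str) -> None:
        """Count a goal only while the clock is still running."""
        if self.time_left() > 0:
            self.score[team] += 1

    def winner(self) -> str:
        red, purple = self.score["red"], self.score["purple"]
        if red == purple:
            return "draw"
        return "red" if red > purple else "purple"


match = Match()
match.goal("red")
match.goal("red")
match.goal("purple")
print(match.winner())  # red leads 2-1
```

The point of keeping state and timing inside the game itself, as the article describes, is that nothing outside the device needs to referee a round.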

Hong Hua and Yixuan Liu create Fooscade where players put their fingers inside a pair of sliding mini boots

tabletop game mini boots
the tiny footwear controls the digital football cleats

the design of the custom-made controllers is also open, so viewers can see the wiring and mechanics inside

the device is a modern spin on the Pong game and tabletop foosball play

users can even play with two fingers from one hand


the game starts as soon as the players have their forefingers in the mini boots

 

project info:

 

name: Fooscade

collaboration: Hong Hua, Yixuan Liu | @studiohuahong

The post virtual football game fooscade comes with mini boots for your fingers appeared first on designboom | architecture & design magazine.

]]>
can an interactive game provide drug-free pain relief? researchers think so https://www.designboom.com/technology/can-an-interactive-game-drug-free-pain-reliever-researchers-painwaive-06-06-2025/ Fri, 06 Jun 2025 09:50:19 +0000 https://www.designboom.com/?p=1137591 named painwaive, the video game aims to alleviate people’s chronic and nerve pain using a neurofeedback system.

The post can an interactive game provide drug-free pain relief? researchers think so appeared first on designboom | architecture & design magazine.

]]>
meet painwaive, an interactive game that doubles as a pain reliever

 

Researchers at the University of New South Wales in Sydney, Australia, have run the first trial of an interactive game that trains people to change their brain waves and relieve their pain without using any drugs. Named PainWaive, the video game aims to alleviate chronic and nerve pain. It’s a neurofeedback system, meaning it uses a person’s brain activity to help them learn to control it; it is, essentially, a game where the brain is the controller and the user is the player at the same time. The system has two main parts: a 3D printed headset and the interactive game that acts as a pain reliever, played on a tablet.

 

The 3D printed headset picks up the user’s electrical signals, called brainwaves. The device sends the brainwave information to the app as the user plays, and the app translates the data into visuals researchers or personnel can see on the tablet. For example, the water in the game’s under-the-sea environment can change color when the user starts to feel calm. In this way, researchers can see how people’s brain activity changes as they play. From these changes, the researchers found that users can learn to produce certain brainwave patterns that, over time, alter brain activity and help them feel less pain without using any drugs.
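The feedback loop described here, where a brain-activity reading drives a visual cue, can be sketched in a few lines. This is an illustrative sketch only: the band names, ratio thresholds, and colors are invented for the example and are not PainWaive’s actual signal processing.

```python
# Illustrative neurofeedback loop: map a (simulated) brainwave reading to a
# visual cue, the way PainWaive's underwater scene shifts as the user calms.
# Band powers, thresholds, and color names below are assumptions, not the
# researchers' real pipeline.

def water_color(alpha_power: float, beta_power: float) -> str:
    """Return a feedback color from relative band power.

    A higher ratio of alpha (relaxation-linked) to beta (arousal-linked)
    activity is treated as 'calm' in this toy mapping.
    """
    ratio = alpha_power / max(beta_power, 1e-9)  # avoid division by zero
    if ratio > 1.5:
        return "deep blue"    # calm: reward the user with a serene scene
    if ratio > 1.0:
        return "teal"         # transitioning toward calm
    return "murky green"      # tense: nudge the user to keep practicing


# Simulated headset readings (alpha, beta) over three moments in a session
readings = [(0.4, 0.9), (0.8, 0.7), (1.2, 0.6)]
for alpha, beta in readings:
    print(water_color(alpha, beta))  # murky green, then teal, then deep blue
```

The key idea the sketch captures is the closed loop: the visual changes only because the measured signal changes, which is what lets users learn to steer their own brain activity.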

interactive game pain reliever
all images courtesy of University of New South Wales | photo by Elva Darnell

 

 

3D printed headset uses water-based system for signals

 

The first small study of PainWaive, run by researchers from the University of New South Wales, the University of Technology Sydney, Charles Sturt University, the University of South Australia, and the University of Washington, has shown positive results. In the trial, four people used the system, and their pain levels were tracked for four weeks. Three of the four participants reported a major decrease in their pain, especially towards the end of the treatment. The pain relief they experienced was, as the researchers describe it, similar to or even better than what some people get from strong pain medications like opioids. The research team could also keep an eye on how participants were doing remotely.

 

By using a 3D printer, the researchers are able to cut the production price of the wearable VR-like headset. They’ve also designed almost everything themselves, including the computer board inside the device. The headset uses a special water-based system to get clearer brain signals, specifically from the sensorimotor cortex, the part of the brain that handles movement and touch and is involved in how people experience pain. Because it is 3D printed, the headset is also lightweight, so it’s comfortable for users to wear for long periods. The researchers are now preparing for a larger study with 224 people who have nerve pain due to spinal cord injuries. Their next goal is to bring the interactive game closer to becoming a widely available, drug-free pain relief option.

the 3D printed headset picks up the user’s electrical signals

the interactive game trains people to change their brain waves to relieve their pain without using any drugs

the game has an underwater setting

previously, the researchers developed another game-based project under Project Avatar

in Project Avatar, a simulated game aims to treat pain from spinal cord injury

 

video showcasing Project Avatar

 


next, the researchers plan to conduct a study involving 224 people

 

project info:

 

name: PainWaive

institutions: University of New South Wales, University of Technology Sydney, Charles Sturt University, University of South Australia, University of Washington | @unsw, @utsengage, @charlessturtuni, @universitysa, @uofwa

researchers: Negin Hesam-Shariati, Lara Alexander, Fiona Stapleton, Toby Newton-John, Chin-Teng Lin, Pauline Zahara, Kevin Yi Chen, Sebastian Restrepo, Ian W. Skinner, James H. McAuley, G. Lorimer Moseley, Mark P. Jensen, Sylvia M. Gustin

study: here

The post can an interactive game provide drug-free pain relief? researchers think so appeared first on designboom | architecture & design magazine.

]]>
why AI language models like chatGPT and gemini can’t understand flowers like humans do https://www.designboom.com/technology/ai-language-models-chatgpt-gemini-understand-flowers-ohio-state-university-06-04-2025/ Wed, 04 Jun 2025 21:50:13 +0000 https://www.designboom.com/?p=1137193 this study suggests that large language models cannot represent human concepts where senses or actions are involved without experiencing the world through the body.

The post why AI language models like chatGPT and gemini can’t understand flowers like humans do appeared first on designboom | architecture & design magazine.

]]>
ohio state university researchers consider capacity of ai models

 

Imagine learning the concept of a flower without ever smelling a rose or brushing your fingers across its petals. We might be able to form a mental image or describe its characteristics, but would we still truly understand the concept? This is the essential question tackled in a recent study by The Ohio State University, which investigates whether large language models like ChatGPT and Gemini can represent human concepts without experiencing the world through the body. The answer, according to the Ohio researchers and collaborating institutions, is that this isn’t entirely possible.

 

The findings suggest that even the most advanced AI tools still lack the sensorimotor grounding that gives human concepts their richness. While large language models are remarkably good at identifying patterns, categories, and relationships in language, often outperforming humans in strictly verbal or statistical tasks, the study reveals a consistent shortfall when it comes to concepts rooted in sensorimotor experience. And so, when a concept involves senses like smell or touch, or bodily actions like holding, moving, or interacting, it seems that language alone isn’t enough.

why AI language models like chatGPT and gemini can’t understand flowers like humans do
all images courtesy of Pavel Danilyuk via Pexels | @rocketmann_team

 

 

chatgpt & gemini might not fully grasp the concept of a flower

 

The researchers at The Ohio State University tested four major AI models — GPT-3.5, GPT-4, PaLM, and Gemini — on a dataset of over 4,400 words that humans had previously rated along different conceptual dimensions. These dimensions ranged from abstract qualities like ‘imageability’ and ‘emotional arousal,’ to more grounded ones like how much a concept is experienced through the senses or through movement.

 

Words like ‘flower’, ‘hoof’, ‘swing’, or ‘humorous’ were then scored by both humans and AI models for how well they aligned with each dimension. While large language models showed strong alignment in non-sensorial categories such as imageability or valence, their performance dropped significantly when sensory or motor qualities were involved. A flower might be recognized as something visual, for instance, but the AI struggled to fully represent the integrated physical experiences that most people naturally associate with it. ‘A large language model can’t smell a rose, touch the petals of a daisy, or walk through a field of wildflowers,’ says Qihui Xu, lead author of the study. ‘They obtain what they know by consuming vast amounts of text — orders of magnitude larger than what a human is exposed to in their entire lifetimes — and still can’t quite capture some concepts the way humans do.’

investigating whether large language models like ChatGPT and Gemini can accurately represent human concepts

 

 

the role of the senses and bodily experience in thought

 

The study, recently published in Nature Human Behaviour, taps into a long-running cognitive science debate over whether we can form concepts without grounding them in bodily experience. Some theories suggest that humans, particularly those with sensory impairments, can build rich conceptual frameworks using language alone, but others argue that physical interaction with the world is inseparable from how we understand it. A flower, in this context, is perceived beyond its form as an object: it is a set of sensory triggers and embodied memories, such as the sensation of sunlight on your skin or the moment of stopping to sniff a bloom, carrying emotional associations with gardens, gifts, grief, or celebration. These are multimodal, multisensory experiences, and current language models like ChatGPT and Gemini, trained mostly on internet text, can only approximate them.

 

Speaking to their capacity, however, one part of the study shows that AI models accurately linked roses and pasta as both being ‘high in smell.’ But humans are unlikely to think of them as conceptually similar, because we don’t just compare objects by single attributes; we draw on a multidimensional web of experiences that includes how things feel, what we do with them, and what they mean to us.


the study by The Ohio State University suggests that these AI models cannot understand sensorial human experiences

 

 

the future of large language models and embodied ai

 

Interestingly, the study also found that models trained on both text and images performed better in certain sensory categories, particularly in dimensions related to vision. This hints at future scenarios in which multimodal training (combining text, visuals, and eventually sensor data) might help AI models get closer to human-like understanding. Still, the researchers are cautious. As Qihui Xu notes, even with image data, AI lacks the ‘doing’ part: how concepts are formed through action and interaction.

 

Integrating robotics, sensor technology, and embodied interaction could eventually move AI toward this kind of situated understanding. But for now, the human experience remains far richer than what language models — no matter how large or advanced — can replicate.


in one part of the study AI models accurately linked roses and pasta as both being ‘high in smell’

 

project info:

 

language models: Gemini, ChatGPT

companies: Google, OpenAI | @google, @openai

photography: Pavel Danilyuk | @rocketmann_team

The post why AI language models like chatGPT and gemini can’t understand flowers like humans do appeared first on designboom | architecture & design magazine.

]]>