

How can AI enable the co-creation of novel, and potentially more sustainable fashion and textile design of the future?

This blog post reports on a collaborative investigation into the potential of bridging AI and fashion. The project was supported by Creative Informatics PhD RA funding and was a collaboration between fashion technology company Away To Mars and researchers from the University of Edinburgh and University of the Arts London.

The Business of Fashion, Textiles and Technology (BFTT) is a Creative R&D Partnership led by University of the Arts London, and one of nine UK-wide R&D partnerships within the £80m UK Creative Industries Clusters Programme (CICP), hosted by the Arts and Humanities Research Council (AHRC). The programme brings together world-class research talent with small and medium-sized enterprises (SMEs) and leading companies and organisations from across the UK.

Away To Mars (ATM) is a fashion technology SME that aims to democratise fashion design and creative collaboration in this field, enabled by an online platform and real-time tools. ATM was awarded R&D funding via the BFTT SME R&D Programme, a mechanism that supports SMEs in developing new methods for innovation and more sustainable approaches to fashion and textile design.

ATM aimed to determine how AI might enable the co-creation of novel, and potentially more sustainable, fashion and textile design of the future. The key aim was for the co-creation platform ultimately to let professional designers and consumers alike co-create with playful AI creative ‘sparkers’: novel, interactive fashion and textile design tools. The end result might, for example, be offered as unique, ready-to-buy physical garments on the ATM retail platform.

Dr Shama Rahman, who holds a PhD in the neuroscience of creativity, was appointed as a post-doctoral researcher via the BFTT funding mechanism to lead the R&D, design and implementation of a bespoke AI system, working with ATM CEO Alfredo Orobio, academic mentor Lynne Craig (University of Edinburgh, previously London College of Fashion, UAL) and Professor Jane Harris. Following preliminary R&D, Viapontica, an AI SME based in the Bayes Centre at the University of Edinburgh, was contracted to develop a working commercial prototype, or minimum viable product (MVP).

A neuro-design approach was applied to the R&D, harnessing multiple state-of-the-art ‘visual AI’ techniques to build a pioneering prototype co-creation platform for fashion and textile design (see Figure 1). The R&D included working with visual AI techniques, modular positioning and experimentation to explore ways in which AI can be used as a co-creation tool for fashion and textile design.

Figure 1: Database of visual themes informing the textile design of the garment

A key requirement of the R&D was to achieve an MVP that provides a ‘real-time’ user experience, i.e. the ability for users to interact with the AI in a ‘natural’ time frame. This required a level of advanced technical development that challenges current expectations of human-computer interaction for web interfaces. To create distinct, royalty-free images, various fashion image datasets were created, ranging from original digitally produced 2D designs (see Figure 2) to hybrid AI/human composite images.

Figure 2: Variations of each image database thematic

The project facilitated a novel collaboration across the UKRI Creative Industries Clusters Programme (CICP), convening two Creative R&D Partnerships: BFTT at University of the Arts London and Creative Informatics at the University of Edinburgh. The project funding aligned to this collaboration also supported PhD candidate Patricia Wu Wu from the School of Design, Edinburgh College of Art. An interdisciplinary fashion designer and researcher, Patricia works at the nexus of computation, digital fabrication and remote sensing to create data-driven designs in the form of 3D-printed wearable forms, data visualisations, generative animation and moving image. As part of the R&D, Patricia created a dataset of novel garment and textile concepts (see Figure 3) that focus on visual patterns. Agent-based simulation was used to generate high-resolution, time-based morphologies driven by the emergent, self-organised behaviour found in complex systems; a minimal sketch of this approach follows Figure 3.


Figure 3: Final co-creation textile prints on three different garment shapes: dress, hoodie and t-shirt
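To make the agent-based idea concrete, here is a minimal, hypothetical sketch of trail-depositing agents whose simple local rules self-organise into a complex pattern. It illustrates the general technique only, not Patricia's actual pipeline; the resolution, agent count, and steering rule are all illustrative assumptions, and it assumes numpy and Pillow are installed.

```python
# Illustrative agent-based pattern generation: NOT the project's pipeline,
# just one way emergent, self-organised behaviour yields time-based morphologies.
import numpy as np
from PIL import Image

RES = 1024          # output resolution (the project worked at 8192 x 8192)
N_AGENTS = 2000     # number of simulated agents (assumed value)
STEPS = 500         # simulation steps; longer runs give denser morphologies

rng = np.random.default_rng(0)
pos = rng.random((N_AGENTS, 2))                  # positions in the unit square
ang = rng.uniform(0, 2 * np.pi, N_AGENTS)        # headings
canvas = np.zeros((RES, RES), dtype=np.float32)  # accumulates agent trails

for _ in range(STEPS):
    # Each agent senses the trail density a short distance ahead of it
    # (a simple slime-mould-like local rule).
    ahead = (pos + 0.01 * np.stack([np.cos(ang), np.sin(ang)], axis=1)) % 1.0
    ij = (ahead * (RES - 1)).astype(int)
    sensed = canvas[ij[:, 1], ij[:, 0]]
    # Steer in response to the sensed density, plus noise for self-organisation.
    ang += 0.2 * np.tanh(sensed) + rng.normal(0, 0.15, N_AGENTS)
    pos = (pos + 0.002 * np.stack([np.cos(ang), np.sin(ang)], axis=1)) % 1.0
    ij = (pos * (RES - 1)).astype(int)
    canvas[ij[:, 1], ij[:, 0]] += 1.0  # deposit a trail (duplicates collapse; fine for a sketch)

# Normalise and save one still; exporting at intervals yields a time-based sequence.
img = (255 * canvas / canvas.max()).astype(np.uint8)
Image.fromarray(img, mode="L").save("morphology_frame.png")
```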

The funding enabled the creation of visual designs to add to the dataset of royalty-free images supporting the AI R&D. The newly created images populated a completely novel dataset for training visual AI. The datasets created include Away to Mars’ own collections, the aim being, in future, to include and support collaborations with leading designer fashion brands, enhancing their own collections and design datasets, such as the previously established collaboration with Missoni and Harvey Nichols.

The research also involved generating novel datasets from distinct visual domains such as nature, landscape and space photography, paintings and illustrations. The images were used to pre-train different combinations of visual AI algorithms and techniques, so that by the time they were deployed in the web interface, generating many distinct visual variations would take minimal computational time. A key requirement for monetising a real-time AI/human interface was that users could select, manipulate and implement each stage of the co-creation process, rather than waiting hours or even days to receive the final design.
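As an illustration of this precompute-then-serve pattern (not the project's actual architecture), the sketch below separates expensive offline generation from a cheap per-request lookup. All names here (generate_variation, precompute, serve, the cache layout) are hypothetical stand-ins.

```python
# Hypothetical sketch: heavy model inference runs offline, so the web
# interface only performs fast lookups of ready-made variations.
import hashlib
from pathlib import Path

CACHE_DIR = Path("precomputed")   # assumed on-disk store of variations
CACHE_DIR.mkdir(exist_ok=True)

def _key(theme: str, seed: int) -> Path:
    digest = hashlib.sha256(f"{theme}:{seed}".encode()).hexdigest()[:16]
    return CACHE_DIR / f"{digest}.bin"

def generate_variation(theme: str, seed: int) -> bytes:
    # Stand-in for expensive pretrained-model inference (minutes, offline).
    return f"rendered:{theme}:{seed}".encode()

def precompute(themes: list[str], seeds: range) -> None:
    # Batch job: write every (theme, seed) variation to disk ahead of time.
    for theme in themes:
        for seed in seeds:
            _key(theme, seed).write_bytes(generate_variation(theme, seed))

def serve(theme: str, seed: int) -> bytes:
    # Per web request: just a disk read, so results appear in 'real time'.
    return _key(theme, seed).read_bytes()

precompute(["bio-geometries", "textures"], range(4))
print(serve("textures", 2))   # instant lookup, no model inference
```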

The AI techniques mainly involved StyleGANs and style transfer, experimenting with the order of ‘chaining’ and with ‘early stopping’ to find the combinations that produce the most aesthetically pleasing visuals; for both the user and the AI creator, these are subjectively selected designs. The aim was for the user to be able to select and produce beautiful textile prints, for example, resulting from the AI’s ‘directed complexity’. Engineering this structure was a significant part of managing and working with datasets in this context.
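A hedged sketch of what ‘chaining’ and ‘early stopping’ might look like in code, under the assumption that each stage is a callable transforming an image array. The stages below are toy stand-ins, not StyleGAN or style-transfer implementations, and looks_good is a hypothetical placeholder for the subjective selection described above.

```python
# Sketch of chaining image-to-image stages and stopping an iterative
# stylisation loop early; all stage functions here are toy stand-ins.
from typing import Callable
import numpy as np

Stage = Callable[[np.ndarray], np.ndarray]

def chain(stages: list[Stage], image: np.ndarray) -> np.ndarray:
    # The order of stages matters: sample-then-stylise and stylise-then-project
    # produce visibly different textures.
    for stage in stages:
        image = stage(image)
    return image

def early_stopped_transfer(image: np.ndarray, step: Stage,
                           looks_good: Callable[[np.ndarray], bool],
                           max_iters: int = 300) -> np.ndarray:
    # Iterative style transfer normally runs for hundreds of steps; cutting it
    # short keeps more of the source image, which can read as a softer print.
    for _ in range(max_iters):
        image = step(image)
        if looks_good(image):   # subjective selection by user / AI creator
            break
    return image

# Toy usage with stand-in stages (real stages would wrap model calls):
rng = np.random.default_rng(1)
seed_img = rng.random((256, 256, 3))            # stands in for a StyleGAN sample
out = chain([lambda im: im ** 0.9,
             lambda im: np.clip(im * 1.1, 0.0, 1.0)], seed_img)
```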

Funding from the Creative Informatics PhD Research Assistant fund contributed to R&D focused on the questions of “How can Data-Driven Innovation unlock hidden value in creative industries’ data sets?” and “Machine Learning, Artificial Intelligence and Automation in the Creative Industries”. Working with Patricia Wu Wu, the team produced two datasets consisting of 2.5k high-resolution images (each 8192 x 8192 pixels). The work was centred around three themes as inspiration.

Short animations were rendered (each taking approximately 90 hours to produce), and still images were exported from them to create two datasets. For the first dataset, illustrated in a greyscale aesthetic (see Figure 4), Patricia combined the first and third themes as a starting point, looking at geometric lines and simulating their movement to create a complex architectural shape. The animation showed the gradual transformation of the geometry and could also shift towards different types of transformation. The second dataset focused on the second theme (soil erosion) and was centred on visual organic patterns.
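By way of illustration, a still-export step along the following lines could turn a rendered animation into dataset images. This is a generic sketch using OpenCV, not the project's actual tooling; the file names and sampling interval are assumptions.

```python
# Export every n-th frame of a rendered animation as a PNG.
# Requires OpenCV (pip install opencv-python).
import cv2
from pathlib import Path

def export_frames(video_path: str, out_dir: str, every_n: int = 30) -> int:
    """Save every n-th frame of the animation; return the number saved."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of the animation
            break
        if frame_idx % every_n == 0:
            cv2.imwrite(str(out / f"frame_{frame_idx:06d}.png"), frame)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved

# e.g. export_frames("geometry_morphology.mp4", "dataset1_greyscale")
```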

The team modified the outputs to create a variety of colour palettes (see Figure 5), adjusting contrast to ensure the resulting images were distinct enough for the AI generation to achieve aesthetic results, and applying an overall brighter, iridescent palette aligned to the Away to Mars design aesthetic.
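A small sketch of how such palette and contrast variants might be produced from greyscale stills using Pillow; the specific palette colours and contrast factor are illustrative assumptions, not the project's values.

```python
# Generate colour-palette variants of a greyscale still with Pillow.
from PIL import Image, ImageEnhance, ImageOps

# Hypothetical (shadow, highlight) pairs for a brighter, iridescent feel.
PALETTES = {
    "iridescent_teal": ("#062a3a", "#aef3e7"),
    "solar_violet":    ("#1c0634", "#ffd9f2"),
}

def palette_variants(path: str, contrast: float = 1.4) -> dict[str, Image.Image]:
    grey = Image.open(path).convert("L")                  # ensure greyscale input
    grey = ImageEnhance.Contrast(grey).enhance(contrast)  # keep variants distinct
    return {name: ImageOps.colorize(grey, black=lo, white=hi)
            for name, (lo, hi) in PALETTES.items()}

# e.g. for name, img in palette_variants("frame_000030.png").items():
#          img.save(f"variant_{name}.png")
```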

Figure 4: Examples from dataset 1 on the theme of complex architectural geometry
Figure 5: Examples from dataset 2 on the theme of visual organic patterns inspired by soil erosion

The two new datasets were integrated with eight larger datasets previously developed by Dr Rahman, titled ‘Bio-geometries’, ‘Away to Space’ and ‘Textures’, adhering to coherence in form, colour and shape. This contribution added value to the overall datasets in terms of novelty, consistency and high resolution, enlarging the datasets to the size required for the visual AI techniques to train and generate relevant results, while also ensuring image rights permissions so that the final designs could be implemented. The project will be ongoing, with a number of publishing and showcasing events via BFTT over 2022/23.

The project has informed the development of a novel concept, the ‘Ideas Economy’, defined during project delivery, which seeks to address ideas and values in the context of human/AI collaboration across the creative industries.