12 November 2020 | 09:00 Europe/Amsterdam

AI as a tool for invention

We at Prosus have an approach to AI that builds on four pillars: (1) AI everywhere, (2) at scale, (3) by design, and (4) ethical and responsible. As we continue developing capabilities across the group and increase the number of models in production, we also dedicate significant resources to understanding how AI can be used above and beyond mainstream applications.

One of these is “AI as a tool for invention”. AI is common in many areas, from object detection to language processing and task automation. AI as a tool for invention is different. It is about using algorithms and machine learning to assist the process of invention (the generation of new products, ideas or artefacts) or to delegate the full invention process to machines. While this could appear far-fetched, we are seeing an increasing number of cases of AI used as a tool for invention. It is important that we keep looking for new ways to apply it, as doing so could further expand how we leverage AI across the group.

AI as an invention/discovery tool

Early this year, The Lancet published the results of an AI-derived hypothesis for the use of an existing drug (Baricitinib) to prevent the extreme immune-system reactions to Covid-19. Further tests confirmed the drug’s effectiveness, and clinical trials are ongoing. The “discovery” leverages a graph repository of structured medical information extracted from the scientific literature by machine learning. The study then uses AI to search for approved drugs that might block the viral infection process, a needle-in-a-haystack problem.

In another recent study, in a completely different field, researchers used deep learning to rediscover the general formula that governs the motion of multiple masses connected by springs. The idea is to model complex physical interactions, for which we have only observations, with deep learning, and then apply symbolic regression to derive the equations that govern the system’s behaviour. The study shows how known physical laws can be rediscovered, but also how previously unknown equations can be discovered from data.
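
To make the second step concrete, here is a minimal sketch in the spirit of that approach (sparse or symbolic regression): we simulate a single mass on a spring, propose a library of candidate terms, and let a sparse linear fit recover the governing equation. The system and the term library are our own illustrative choices, not those of the study.

```python
import numpy as np

# Simulate a mass on a spring: the true law is a = -(k/m) * x
k, m, dt = 4.0, 1.0, 0.01
x, v = 1.0, 0.0
xs, accs = [], []
for _ in range(2000):
    a = -(k / m) * x
    xs.append(x)
    accs.append(a)
    v += a * dt          # simple Euler integration
    x += v * dt
x, a = np.array(xs), np.array(accs)

# Library of candidate terms the equation could be built from
library = np.column_stack([x, x**2, x**3, np.sin(x), np.ones_like(x)])
names = ["x", "x^2", "x^3", "sin(x)", "1"]

# Sparse fit: least squares, then drop small coefficients
coef, *_ = np.linalg.lstsq(library, a, rcond=None)
coef[np.abs(coef) < 0.1] = 0.0

terms = [f"{c:.2f}*{n}" for c, n in zip(coef, names) if c != 0.0]
print("a =", " + ".join(terms))   # recovers approximately: a = -4.00*x
```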

In yet another domain, AlphaGo impressed Go players by defeating the best player in the world with “creative” moves. Within the narrow scope of a game, AlphaGo was able to learn, plan and come up with moves that are both competent and original.

Creativity and machines

Creativity is defined as the ability to come up with ideas or artefacts that are new, surprising and valuable. This ability is an aspect of human intelligence grounded in everyday skills: conceptual thinking, perception, memory and reflective self-criticism. The artefacts can be objects, products, services, drawings, models, strategies, narratives, art, music and many others. “New” can mean new to one person, but an idea is most valuable if it is new to many, or to everyone. “Surprising” depends on a frame of reference and can mean unexpected or apparently impossible. Finally, “valuable” has many dimensions, ranging from commercially useful to beautiful.

Anyone who works with software knows that it is easy for an algorithm to randomly create new, surprising things. But are they valuable? That depends on who is judging. If a computer comes up with random combinations of notes, a human being could well detect some pleasant patterns. A gifted musician, however, might come away with a novel idea that sparks a new form of composition. One also needs to acknowledge that the majority of what professional artists and scientists do, what we would associate with creative work, is exploratory; only a small part is truly transformational creativity. As Paul Valéry once noted, it takes two to invent anything: one makes up combinations, the other chooses.

Deep learning and generative AI

Deep learning (DL) has many attributes that make it potentially interesting for supporting the invention process. Traditional DL is applied to learn a map from complex, high-dimensional input data (an image, a sentence, a time series) to a specific output (the content of the image, the next word in a sentence, the next value in the series). DL is particularly effective at learning the relationship between input and output from examples (supervised learning) and at performing classification and prediction. For instance, given images of chairs, a DL model learns the relationship between the pixel configuration in the input image and the corresponding output, so that new images of chairs can be classified correctly.
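
As a rough illustration of this supervised setup, the sketch below trains a small convolutional network to map images to one of two classes. Random tensors stand in for labelled chair photographs; a real pipeline would load an actual dataset.

```python
import torch
import torch.nn as nn

# Minimal supervised image classifier; random tensors stand in for
# labelled "chair" / "not chair" photos (a real setup loads a dataset)
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),         # two output classes
)

images = torch.randn(64, 3, 64, 64)     # placeholder 64x64 RGB batch
labels = torch.randint(0, 2, (64,))     # placeholder class labels

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):                 # learn the input-output map
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```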

While this all works well, DL is not good at telling us what the hidden structure is: what patterns, proportions, clusters of data, correlations between data, causal relationships must be present to say that a collection of pixels is a chair. Technically, we would like to learn a very complicated probability distribution from a small set of high-dimensional points (the images of chairs) sampled from that distribution. If we knew the distribution, we could generate as many chair images as we like. But of course, since virtually anything can be described as a high-dimensional data point, this could be applied to anything. We could create drugs that are effective for a specific condition, poetry that people like, objects that work well for a given purpose, etc.
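
The core idea can be illustrated in a low-dimensional toy setting: estimate the distribution from observed samples, then draw new points from the estimate. The kernel-density sketch below only works because the data is two-dimensional; for images and other high-dimensional data this simple approach breaks down, which is where deep generative models come in.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# A set of observed 2-D points, sampled from an unknown distribution
rng = np.random.default_rng(0)
observed = rng.normal(loc=[0.0, 5.0], scale=[1.0, 2.0], size=(500, 2))

# Estimate the distribution from the samples...
kde = KernelDensity(bandwidth=0.5).fit(observed)

# ...then generate as many new points from it as we like
new_points = kde.sample(10)
print(new_points)
```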

In general, this problem is intractable and has no exact solution, but there is a growing set of models that provide useful approximations. Unsupervised learning and a special group of DL architectures (variational autoencoders, encoder-decoder architectures, generative adversarial networks and transformers) are particularly interesting for this purpose.

Variational autoencoders (VAEs), for example, train an encoder to compress an input into a compact representation (called the “code”) and a decoder to decompress the code into the original input. Intuitively, the intermediate representation (the code) must capture whatever is essential in the input data in order to regenerate it. If, furthermore, we force the compact representation to have a specific mathematical form (one that ensures the representation space is continuous and smooth), we can generate outputs based on parameters. For instance, if we train a VAE on images of birds, we can in all likelihood play with the parameters and sample the “code” to generate something “bird-like”. The closer we stay to the birds used for training, the more similar the generated birds will be. The further we move away from the training data, the more bizarre, strange or implausible the birds will be, with some very creative outcomes.
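
A minimal VAE sketch is shown below, with illustrative sizes and the usual Gaussian form imposed on the code; inputs are assumed to be scaled to [0, 1], as pixel intensities would be.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE; sizes are illustrative, inputs assumed in [0, 1]."""
    def __init__(self, input_dim=784, code_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, code_dim)        # mean of the code
        self.to_logvar = nn.Linear(256, code_dim)    # log-variance of the code
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation: sample a code from N(mu, sigma^2)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus a KL term pushing codes towards N(0, I);
    # the KL term is what makes the code space smooth enough to sample
    recon_err = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

# After training on bird images, "bird-like" outputs come from decoding
# samples of the prior, e.g. vae.decoder(torch.randn(1, 8))
```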

Still, birds have to respect some obvious constraints to be considered as such. In other domains, it is harder to pin down these constraints (what makes a startup viable? what are the design parameters of successful drugs?) and we frequently invent around known patterns.

Generative models may have fewer constraints and a broader search space than we have. They are designed to learn the hidden structure of data, and this can be exploited to create artefacts that are potentially new, surprising and valuable.

Examples of generative AI

Take, for instance, the generation of faces of people who do not exist. The authors of the system extended, altered and tweaked a model (a GAN in this case) to achieve remarkable fidelity in face generation. As a face generated this way belongs to no one, it is royalty- and privacy-neutral. Advertising agencies have quickly exploited the opportunity to create synthetic characters and synthetic agents that engage with customers.
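
The sketch below shows the adversarial training idea behind such systems in miniature: a generator learns to produce samples while a discriminator learns to tell them apart from real data. Sizes and data are toy placeholders; production face generators are far larger and heavily engineered.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 784
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim), nn.Tanh())       # generator
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))                         # discriminator

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(64, data_dim)    # placeholder for real face images

for step in range(200):
    # Discriminator learns to tell real samples from generated ones
    fake = G(torch.randn(64, latent_dim)).detach()
    loss_d = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator learns to produce samples the discriminator scores as real
    fake = G(torch.randn(64, latent_dim))
    loss_g = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```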

In addition to images and visual objects, there are several building blocks that can be used in applications: for instance, the generation of realistic speech from text, or the generation of a face from speech. One of the areas of most rapid development is language, with tools like GPT-3 used in a wide range of language tasks, including the generation of text from a human prompt. See the example below:

HUMAN PROMPT: John St. Clair Etouffee, world-renowned art expert, connoisseur, aesthete, art historian, and lecturer at the University of Oxford, is walking around the Louvre, savoring the exquisite artwork of all ages assembled before him. The Louvre is closed, allowing him free access to all the paintings and silence in which to meditate them. He stops in front of La Gioconda, Leonardo da Vinci’s world-famous Mona Lisa. After a time, he begins to speak and describe the qualia, the frisson, the nigh-indescribable experience of the subtle swirls & sfumato of the painting

MACHINE RESPONSE: The sun is now barely a foot or two below the horizon and with every second, the golden tone of the painting deepens. Everything melts in it. I, who am having this experience, appear to you in a little corner of the painting and I look at you through the painting and our eyes meet. Our arms, our shoulders, our heads, our hands, our necks, our mouths, our lips, our eyes, our fingers and the tips of our toes — all this transforms itself into bright, beauteous colour. Everything becomes alive, all there is left in me is love and appreciation. It makes me want to shout at you “Don’t leave!” but you don’t hear me and I remain silent. La Gioconda looks at me and I at La Gioconda, the same as before, but now it’s like being in two places at the same time and my feet have no feeling.”
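
Programmatically, prompt-based generation looks roughly like the sketch below. GPT-3 itself is only reachable through OpenAI’s API, so we use the openly available GPT-2 via the Hugging Face transformers library as a stand-in, with the prompt shortened for illustration.

```python
from transformers import pipeline

# GPT-2 as an openly available stand-in for prompt-based generation
generator = pipeline("text-generation", model="gpt2")

prompt = ("John St. Clair Etouffee, world-renowned art expert, stops in "
          "front of the Mona Lisa and begins to speak:")
result = generator(prompt, max_length=120, num_return_sequences=1)
print(result[0]["generated_text"])
```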

It is natural to expect that artists have been quick to exploit the opportunity. Generative AI has been used to create spectacular visuals and immersive installations, to create songs for musical contests, to teach robots to improvise jazz, or as a sparring partner for improvisation shows. There is a vibrant ecosystem of AI-based art that blends the ability of AI to create something new and surprising with the sensibility of the artist to identify what is valuable.

In more applied areas, AI has been used to support the design of furniture with specific features (light, resistant, using less material), or in architecture to explore plans and layouts of living spaces. Some progressive whiskey makers have used machine learning to generate whiskey recipes that meet taste and sales criteria by design. The recipes are reviewed and selected by human blenders before they are produced, and apparently this has led to remarkable commercial success!

Some of the most promising areas are drug discovery and materials science, where the sheer number of possible combinations of molecules and compounds makes the hit rate of discovery very low and the process extremely expensive. For instance, predicting the 3D structure of a protein from its genetic sequence is one of the oldest and grandest challenges in biology. Once the shape of a protein is understood, targeted drugs can be designed.

Another area where generative AI is widely used is the creation of synthetic data: data that is indistinguishable from real data for a given use, yet synthetic. In finance, this is used to generate large amounts of credit-card fraud data to better train fraud models, to generate trading strategies to fine-tune investment models, or to improve anomaly detection by learning the difference between regular and anomalous behaviour. In healthcare, synthetic patient data is used to facilitate sharing data across medical institutions and for research purposes without compromising patient privacy. It must also be said that generating useful synthetic data is not a trivial problem, and its applicability is so far limited to specific use cases.
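
As a simple sketch of the idea, one can fit a generative model to real records and then sample new, synthetic ones from it. The toy “transaction” features below are invented for illustration; real systems use far richer models and careful validation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Invented "real" transaction features: amount and hour of day
rng = np.random.default_rng(0)
real = np.column_stack([
    rng.lognormal(mean=3.0, sigma=1.0, size=5000),   # transaction amount
    rng.normal(loc=14.0, scale=4.0, size=5000),      # hour of day
])

# Fit a generative model to the real records, then sample synthetic ones
gmm = GaussianMixture(n_components=5, random_state=0).fit(real)
synthetic, _ = gmm.sample(5000)

# Each synthetic row follows the learned distribution but corresponds to
# no real transaction; checking that it preserves the statistics that
# matter downstream is the hard part in practice
print(synthetic[:3])
```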

And of course, an entire cottage industry of fakes has emerged as a side effect of these capabilities, with more and more sophisticated data, images and narratives being generated algorithmically, indistinguishable from real material.

Why is this important?

At present we see generative AI being explored and applied for some key reasons:

Accelerated discovery. This is mainly to tackle the needle-in-a-haystack problem and to narrow the search down to promising options very early in the discovery process. This is particularly promising for drug discovery.

Product design. The focus is on combinatorial innovation and on the generation of object and product blueprints that explore combinations of design variables beyond human ability.

Anomaly detection. Broadly speaking, this implies learning normal and anomalous behaviour from complex data. Once achieved, we can detect anomalies but also generate scenarios and simulations that represent unseen anomalies or fraud attempts.

Synthetic data. This serves to create data that is free of sensitivity, privacy, proprietary or licensing constraints. Furthermore, it can be used to generate data for training purposes, alleviating the need to collect real data.

But why is this important?

Think, for instance, of how we deal with anomaly detection. It is one thing to design tools that detect anomalies well, something commonly done with rule-based systems, statistics and machine learning. It is another to generate, or “invent”, anomalies that we have never seen and that could emerge in the future. Similarly, it is one thing to support drug development by helping predict drug efficacy, and another to explore the space of candidate drugs to generate options for more rapid design and testing.
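
A minimal sketch of the detection side: train an autoencoder on normal behaviour only, and flag inputs it cannot reconstruct well. The data here is a random placeholder; the generative counterpart would sample from such a model to produce plausible unseen anomalies.

```python
import torch
import torch.nn as nn

# Train an autoencoder on "normal" records only (placeholder data)
normal = torch.randn(1000, 20)
model = nn.Sequential(
    nn.Linear(20, 8), nn.ReLU(),    # compress to a bottleneck
    nn.Linear(8, 20),               # reconstruct the input
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

# Records the model cannot reconstruct are unlike anything it has seen
with torch.no_grad():
    train_err = ((model(normal) - normal) ** 2).mean(dim=1)
    threshold = train_err.mean() + 3 * train_err.std()
    new = torch.randn(5, 20) * 3    # placeholder candidate records
    new_err = ((model(new) - new) ** 2).mean(dim=1)
    print(new_err > threshold)      # crude anomaly flags
```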

While we are far from tools that replace human creativity, we already have a range of tools that can support the process of innovation and invention. We see them as promising for the design of new products, services and business models through the generation of new, surprising options. It is a nascent field, but one worth paying attention to for its potentially profound implications.