On August 29, 2001, the massive planetary radar in Evpatoria, Crimea, was pointed toward a sun-like star 69 light years from Earth. Over the course of 15 minutes, the telescope transmitted the world’s first extraterrestrial concert into space. The so-called ‘Teenage Message’ consisted of renditions of classical and Russian folk music played on the theremin, an electronic instrument best known for horror movie sound effects, but whose pure, continuous tones convert readily into a narrow-band radio signal, making it an ideal instrument for broadcasting into space.

Less than two years after the Teenage Message was transmitted into space, Alexander Zaitsev, a Russian radio astronomer and the director of the project, made history again. In 2003, he oversaw the transmission of the second Cosmic Call message from the Evpatoria radar, which sent a scientific message to five nearby stars. Notably, the message also included the source code for a chatbot called astrobot Ella. Although Ella is far less sophisticated than Siri and the other chatbots we all carry around in our pockets today, at the time it was considered a cutting-edge example of machine learning. Ella could respond to natural language inputs, crack jokes, play blackjack, and tell fortunes – not a bad repertoire for our first interstellar AI ambassador.

Zaitsev may not have realized it at the time, but his pioneering experiments in interstellar communication laid the foundation for extraterrestrial messages for decades to come. The Teenage Message recital and the transmission of astrobot Ella were watershed moments in the history of messaging extraterrestrial intelligence (METI): they demonstrated the strengths of music and artificial intelligence as media for interstellar communication, the two approaches that have come to define the future of extraterrestrial messages.

The dream of using music to communicate with an extraterrestrial intelligence across the vast expanses of time and space has haunted the human imagination for centuries. In one of the earliest known examples of science fiction, The Man in the Moone, the 17th century bishop Francis Godwin tells the story of a man carried to the moon by geese, where he encounters an extraterrestrial civilization that speaks not in words, but in “tunes and uncouth sounds”. Ironically, Godwin frames this musical language as a barrier to communication rather than a common basis to facilitate understanding.

Today, music has come to occupy a primary place in METI efforts precisely because scientists and artists believe it may be an ideal medium for communicating information about humans and life on Earth. The two most recent interstellar messages to be transmitted into space in 2017 and 2018 were primarily musical in nature and mark the first steps toward ‘extraterrestrial music’. Indeed, the Sónar messages (named after the Spanish music festival that collaborated on the project) marked a radical departure from previous METI efforts in several key respects. Not only were they the first to target a known exoplanet in the habitable zone of its host star, they were also the first to include musical selections specifically written for an extraterrestrial audience.

All previous interstellar messages that have included a musical component have repurposed music that was written for human ears only. These songs were included in an attempt to convey human aesthetic sensibilities, which would presumably be of interest to an extraterrestrial intelligence. But if the purpose of an interstellar message is to maximize the amount of information conveyed about life on Earth in a manner that will be intelligible to a non-human intelligence, selecting pre-existing songs for transmissions is insufficient. Not only are they burdened with a strong selection bias (i.e. which songs should we send, and who gets to decide?) but they may also be impoverished from an informational perspective.

Aside from providing insight into human culture, one of the main benefits of transmitting music to extraterrestrials is that it can teach the recipient a lot about our physiology and cognition. For example, the human hearing range only extends from about 20 Hz to 20,000 Hz, and there is a limit to how finely we can differentiate between two neighboring frequencies, known as the just-noticeable difference (although just where this limit lies remains the subject of active debate). So a hallmark of extraterrestrial music would be songs that are designed to educate ETs about our musical perception by using, say, microtonal arrangements that cover the full frequency spectrum. Some progress is being made in this direction by the SETI Institute, whose Earthling project is crowdsourcing audio from around the world and rolling it into a unified piece of music. Although the Earthling project is not intended for broadcast, it will provide invaluable insight into how to compose music for extraterrestrial ears.
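
To make the idea concrete, here is a minimal sketch of what a perception-probing scale might look like: the audible spectrum divided into equal steps on a logarithmic frequency axis, mirroring how human pitch perception works. The step size and range here are illustrative assumptions, not a proposal from the Earthling project or any other METI effort.

```python
import math

# Human hearing spans roughly 20 Hz to 20,000 Hz: just under 10 octaves.
F_LOW, F_HIGH = 20.0, 20_000.0
STEPS_PER_OCTAVE = 24  # quarter-tone (microtonal) resolution; an illustrative choice

octaves = math.log2(F_HIGH / F_LOW)              # ~9.97 octaves
n_steps = math.ceil(octaves * STEPS_PER_OCTAVE)  # 240 steps

# Each pitch is a fixed ratio above the last, so the scale is spaced
# evenly in log frequency rather than in raw hertz.
scale = [F_LOW * 2 ** (i / STEPS_PER_OCTAVE) for i in range(n_steps + 1)]

print(f"{len(scale)} pitches from {scale[0]:.0f} Hz to {scale[-1]:.0f} Hz")
```

A piece that walked a listener through such a scale, and through pairs of tones at progressively smaller intervals, would encode the boundaries of human hearing directly in its structure.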

In addition to music, the interstellar messages of the future are likely to have a strong AI component. As Carl Sagan and many others have noted, the relatively young age of Earth and the human species suggests that any extraterrestrial civilization we contact will almost certainly be more technologically advanced than our own. We cannot say, of course, whether their technological progress will have proceeded in the same way as our own, but we do assume that they share some similarities, such as the ability to encode information in radio waves and other forms of electromagnetic radiation.

On Earth, the past few decades have seen remarkable advances in a narrow form of artificial intelligence known as machine learning, which allows computers to discover meaningful patterns by analyzing massive data sets. If extraterrestrials are assumed to be more technologically sophisticated than humans, we might expect them to possess advanced artificial intelligence; indeed, the extraterrestrials themselves may be more silicon than flesh. In this case, it would make sense to design future interstellar messages with an eye toward machine intelligence.

There are two main approaches here. The first is to transmit the code for a machine learning program that could be operated by an extraterrestrial intelligence, as in the case of astrobot Ella. In principle this would allow an ET to have an interactive experience with a representative of planet Earth, but it would also require the recipient to learn our programming languages in order to run the program, which is a massive undertaking on its own. A more fruitful approach might be to transmit a massive natural language dataset, also known as a corpus, that ET can use to train its own AI and learn about life on Earth.

Some researchers have already begun theoretical work on the design of an interstellar corpus. For example, it has been estimated that a natural language corpus transmitted to ET would need to contain at least 20,000 words – about 20 times the length of this essay – for the recipient to distinguish it as linguistic, rather than random noise. But a lot of important questions still need to be answered, such as what the content of the corpus should say and how to endow the text with meaning.
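
One statistical fingerprint a recipient could look for is Zipf’s law: in natural language, word frequencies fall off roughly in inverse proportion to their rank. The toy sketch below illustrates the idea; the vocabulary size, sample size, and the 1/rank model are illustrative assumptions, not the published corpus-size analysis.

```python
import math
import random
from collections import Counter

def zipf_slope(tokens):
    """Least-squares slope of log(frequency) vs log(rank).

    Natural language tends toward a slope near -1 (Zipf's law);
    tokens drawn uniformly at random give a much flatter curve."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

random.seed(0)
vocab = range(1000)

# Toy 'language': 20,000 tokens whose ranks follow a 1/rank distribution.
language = random.choices(vocab, weights=[1 / r for r in range(1, 1001)], k=20_000)

# 'Noise': the same vocabulary and length, but drawn uniformly at random.
noise = random.choices(vocab, k=20_000)

print(f"language slope: {zipf_slope(language):.2f}")  # close to -1
print(f"noise slope:    {zipf_slope(noise):.2f}")     # much closer to 0
```

The larger the sample, the more clearly the two curves separate, which is one intuition behind demanding a minimum corpus size.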

The latter problem will be particularly pernicious, since a machine learning algorithm doesn’t really ‘understand’ the meaning of the data it processes, only the patterns between the elements in the data. In short, we’ll need to design a metalanguage that explains the meanings of the words in a natural language corpus once ET has teased out the grammar of the text. To date, only one metalanguage has been designed for this purpose. Known as Lincos, it was first devised by the Dutch mathematician Hans Freudenthal in 1960 and later reformulated by the astronomer Alexander Ollongren on the basis of the lambda calculus and the calculus of constructions, two forms of higher-order logic. But Lincos has yet to be sent into space, and a number of conceptual issues with the language must be addressed before it is transmitted to ET.
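
To get a feel for how meaning can be built from pure structure, consider the Church numerals of the lambda calculus on which Lincos is based: numbers defined entirely by patterns of function application, with no symbols whose meaning must be agreed upon in advance. The sketch below is written in Python for readability; it is only an illustration of the underlying idea, not actual Lincos syntax.

```python
# Church numerals: a number n is 'apply a function f, n times'.
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting how many times it applies f."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(two))            # 2
print(to_int(add(two)(two)))  # 4
```

A recipient who worked out this pattern would have learned what ‘two’ means without ever being told; scaling that trick up to an entire natural language corpus is the challenge a metalanguage like Lincos is meant to solve.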

It’s fitting that the future of interstellar communication will be defined by two fields as drastically different as music and machine learning. Emotion and logic are the yin and yang of the human experience and any interstellar message should strive to capture both aspects of our species. Although music is often seen as a manifestation of pure emotion, its composition is also deeply logical; likewise, the unfeeling logic of a machine can be used to interpret the language of poetry. Sending music and intelligent machines to the stars can’t capture the boundless possibilities of life on Earth, but it’s a good place to start. All we can hope is that there’s someone listening.

Daniel Oberhaus

Daniel Oberhaus is a staff writer at Wired magazine and the author of Extraterrestrial Languages (MIT Press, 2019).

The future of interstellar communication

© Vollebak 2020

Founded in 2016, Vollebak uses science and technology to make the future of clothing happen faster. In our first four years we’ve made the world’s first Graphene Jacket using the only material in the world with a Nobel Prize, released 100 Year clothing designed to outlive you, created a Plant and Algae T Shirt grown in forests and bioreactors that turns into worm food, and designed the first jacket for deep space travel. You can find out more about us at vollebak.com.
