It is essential to consider AI's impact on language and culture: American scientist
TEHRAN- Steven T. Piantadosi, a professor of psychology and neuroscience at UC Berkeley, says that as artificial intelligence becomes more prevalent in our lives, it is essential to consider its potential impact on language and culture and to ensure that it is developed in a way that promotes diversity and inclusivity.
The relationship between AI and language is a vital area of study, he told the Tehran Times in an exclusive interview.
“Large language models have shown recently that statistical learning can acquire all kinds of different structures, including probably those needed for syntax and semantics.
AI technologies are currently being used to develop natural language processing systems, speech recognition software, and translation tools.”
Although these advancements have the potential to revolutionize communication and interaction, there are concerns about the role of AI in shaping language and culture, he added.
Linguist Daniel Everett argues that language and culture exist in a symbiotic relationship, with each one shaping and affecting the other.
In this context, there are two distinct perspectives regarding the role of AI in language acquisition and understanding. Noam Chomsky, a renowned linguist, is skeptical of AI's ability to replicate human language and thought.
He argues that language is an innate capability of humans and that recent advancements in AI have no bearing on matters concerning language, thought, learning, or cognition.
On the other hand, Everett challenges Chomsky's argument about "innate principles of language," citing ChatGPT as an example of how a language can be learned without any hard-wired principles of grammar.
Everett advocates for a revision of current theories of language learning to consider semiotics and inferential reasoning.
Overall, while AI and machine learning have the potential to bring significant advancements and benefits in various domains, their development and use require careful consideration of potential social, ethical, and cultural implications.
Ongoing debates and critiques underscore the importance of research and dialogue, and it is crucial to prioritize responsible development to ensure that these technologies are used to improve society and benefit all individuals.
Following is the text of the interview:
Q: Let's begin by discussing the origins of language and its implications for language acquisition. When Chomsky and Everett discuss the origins of language, they present different perspectives. Inspired by Kant, Chomsky argues that language is an innate capability of the human brain, while Everett challenges this view on anthropological grounds, saying that language is more than just grammar and has social and cultural origins. Is there any evidence to explain how humankind developed this ability while other primates did not? Religious traditions posit it as a divine gift from God, scholars like Chomsky and Ray Jackendoff attribute it to mutation, and Everett highlights social interaction.
A: I think the origins of language are one of the big outstanding questions in science. We are certainly genetically different from even nearby primates like chimpanzees in a way that allows us and not them to acquire and use language, but it is unclear what the exact difference is.
It could be very shallow, such as the difference in the overall ability to process and remember information, or it could be very deep, like innate biases towards certain kinds of rules that are required for language.
I tend to think it's more likely that there are general cognitive differences beyond language, because there are so many cognitive things beyond language that people do and other species seem incapable of.
Q: How do probabilistic models contribute to our understanding of language acquisition and processing, and how can this knowledge be applied in other areas of cognitive science? Additionally, what are the biggest challenges currently faced in the study of language evolution and cognition, and how does your research plan address them?
A: For a long time, many in linguistics, particularly those who worked in Chomsky's tradition, argued that it was logically impossible for statistical learning to acquire the core structures of language.
What large language models have recently shown is that this isn't true -- that statistical learning can acquire all kinds of different structures, including probably those needed for syntax and semantics.
The question now is how much data it takes, because these statistical learning models currently take much more data than children.
Q: How do you see the relationship between language structure and cultural evolution?
A: I think that the language and culture we have is the outcome of a long process of cultural evolution. The knowledge and representations in these models aren’t created that way, although we can probably think of it as some fuzzy, coarse approximation to our own culture and language.
I think it's possible that cultural evolution has been one of the key features that enabled human intelligence, but it's not clear whether it will be important for these learning models.
Q: What are the implications of large language models for theories of language, including representations of structure and connections to human learning? How do these models relate to other approaches in linguistics, and what is their ability to answer "why" questions about the nature of language?
A: I argue, like many who worked on these models over the past few decades, that these models provide answers to "why" questions because they show how the required knowledge and representations can emerge from certain kinds of learning architectures.
Others try to answer "why" questions by positing specific innate representations, but I think that research program has not been very productive in explaining or predicting the diversity of forms that language can take.
Q: Is AI a danger to humanity or just a threat to the interests of some people?
A: I think these models are potentially a threat, like any new technology, and so it's worth taking these kinds of concerns seriously.
I suspect that the main threat won't be a threat to humanity, but more of a disruption to things like the tests and essays we use in teaching. There is already a clear threat to humanity that we should be taking seriously, climate change, and this kind of AI argument distracts from it.
Q: Large language models are especially vulnerable to abuse. What is your comment on the ethical concerns?
A: They are trained on the text on the internet and therefore have internalized all kinds of biases that you find on the internet.
That means that they underlyingly "believe" some horrible things, and companies have to put filters on them so that this information doesn't come out to users. But it's still there, inside.
To me, this means that you would not want to trust these models for anything.
Q: How can large language models be leveraged to improve language education for non-native speakers and individuals with language impairments?
A: I think there is a lot of potential there, but I am not an expert in this. Certainly, the technology in text-to-speech and speech-to-text has improved markedly in recent years, and I think that tools for improved grammar checking, maybe incorporating some semantic knowledge or analysis, will be very useful to people. But I couldn't predict where it's headed.