Ethical issues surrounding AI have taken centre stage lately, and AI pioneer OpenAI now faces internal strife and external criticism over its practices and the risks posed by its technology.
In May, several high-profile employees, including safety researcher Jan Leike, left the company over disagreements about safety, arguing that product releases were being prioritized over safety work.
Leike’s exit prompted former board members to accuse CEO Sam Altman of psychological abuse. External critics, meanwhile, warn of advanced AI’s existential threats, job displacement, and misuse for misinformation. In response, employees from OpenAI, Anthropic, DeepMind, and other AI companies wrote an open letter addressing these risks.
Originally signed by 13 employees, the letter outlines four core demands to protect whistleblowers and promote greater transparency and accountability in AI development. It demands that companies refrain from enforcing non-disparagement clauses or retaliating against employees who raise risk-related concerns.
Additionally, the letter calls for companies to create a verifiably anonymous process for employees to report concerns to boards, regulators, and independent experts.
It also emphasizes the need for companies to foster a culture of open criticism, allowing employees to publicly share risk-related concerns while protecting trade secrets. Furthermore, the letter insists that companies must not retaliate against employees who share confidential risk-related information when other processes have failed.
Reports indicate that OpenAI has required departing employees to sign non-disclosure agreements preventing them from criticizing the company, at the risk of losing their vested equity. OpenAI CEO Sam Altman admitted feeling “embarrassed” by the situation but claimed the company had never actually clawed back anyone’s vested equity.
As the AI revolution charges forward, the internal strife and whistleblower demands at OpenAI highlight the growing pains and unresolved ethical quandaries surrounding the technology.
Read more: Verses AI raises CAD$10M in private placement and leans into AI product, Genius
Read more: Verses announces Genius public beta preview and webinar June 20
Deepfakes are a rising concern
The risks of artificial intelligence have already started to cause problems.
For example, students are rapidly discovering the ease with which AI can generate nefarious content, creating a new world of bullying that schools and the law are not fully prepared for. Educators are witnessing the creation of deepfake sexual images of their students, along with sham voice recordings and videos, which pose a looming threat. Advocates are sounding the alarm on the potential damage, as well as on gaps in both the law and school policies.
According to the BBC, a mother in Pennsylvania allegedly created AI images of her daughter’s cheerleading rivals naked and drinking at a party before sending them to their coach.
These new, vicious uses of AI are posing challenges for schools. The Pennsylvania facility, for example, could not independently verify that the images were fake and had to turn to the police for help.
Even experts in the field are just beginning to comprehend AI’s destructive potential in school environments, such as the schoolyard or the locker room.

Image via the World Economic Forum.
In February, the Federal Trade Commission proposed new protections against AI impersonation, aiming to extend its existing rules against impersonating businesses and government agencies to cover individuals. Concurrently, the Department of Justice established an artificial intelligence officer position to enhance its understanding of this emerging technology.
Recently, a bipartisan group of lawmakers, with the endorsement of Senate Majority Leader Chuck Schumer (D-N.Y.), released a report outlining Congress’s necessary actions regarding AI, including addressing the challenges posed by deepfakes.
AI companion apps downloaded 225m times
The latest developments in AI are also extending into dangerous and ethically questionable new territory.
For example, a first-of-its-kind cyber brothel is set to open in Berlin, allowing customers to book hour-long sessions with AI-powered sex dolls for both verbal and physical intimacy.

Replika AI companions. Image by Replika.
This launch exemplifies the growing influence of AI in the adult entertainment industry. AI companion apps are experiencing a surge in downloads, but concerns are being voiced about potential downsides.
“Many people feel more comfortable sharing private matters with a machine because it doesn’t judge,” says Philipp Fussenegger, founder and owner of Cybrothel.
“Previously, there was significant interest in a doll with a voice actress, where users could only hear the voice and interact with the doll. Now, there is an even greater demand for interacting with artificial intelligence.”
AI companion apps have been downloaded 225 million times from the Google Play Store, according to analysis by global software firm SplitMetrics.
Dr. Kerry McInerney, senior research fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, warns that the merger of AI and the adult entertainment business has set off alarm bells due to the bias inherent in generative AI. This bias could lead to the encoding of retrograde gender stereotypes about sex and pleasure into sex chatbots.
Additionally, AI chatbots could be addictive, particularly for those struggling with loneliness. Some AI chatbots also contain disturbing content, including themes of abuse, violence, and underage relationships.
Read more: Verses AI onboards chief product officer in push for AI product Genius
Artificial general intelligence represents AI problems at scale
In November 2021, all 193 of UNESCO’s Member States adopted the first-ever global standard on AI ethics, the ‘Recommendation on the Ethics of Artificial Intelligence.’ The cornerstone of this framework is the protection of human rights and dignity, emphasizing fundamental principles such as transparency and fairness, and highlighting the importance of human oversight of AI systems.
“In no other field is the ethical compass more relevant than in artificial intelligence,” said Gabriela Ramos, Assistant Director-General for Social and Human Sciences of UNESCO. “AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms.”
These guardrails become even more important given the prospect of artificial general intelligence (AGI).
OpenAI has refined its ethical position over the past two years, incorporating feedback from many internal and external sources. While the timeline to AGI remains uncertain, the company says it will use its charter as a guide for acting in the best interests of humanity throughout AGI’s development.
The company’s overall mission is to ensure that AGI—meaning highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.
Meanwhile, Verses AI (CBOE: VERS) (OTCQB: VRSSF) advocates for a hyper-integrated and ethically-aligned community where humans, machines, and intelligent systems coexist.
This vision is fostered by a commitment to ethical ideals that permeate the company’s culture and product development. The company believes the future lies in a collaborative ecosystem of distributed intelligence, a concept it calls the Spatial Web, built upon a foundation of safety and trust.
Verses AI is a sponsor of Mugglehead news coverage.
