Artificial intelligence (AI) researchers at Stanford developed their ChatGPT-style chatbot demo, Alpaca, in less than two months but terminated it, citing "hosting costs and the inadequacies of content filters" in the large language model's (LLM) behaviour.
The termination was announced less than a week after the demo's release, according to The Stanford Daily.
The source code of Stanford's model, developed for less than $600, is publicly available.
According to the researchers, their chatbot performed similarly to OpenAI's GPT-3.5.
In their announcement, the researchers said that Alpaca is intended only for academic research and not for general use in the near future.
Alpaca researcher Tatsunori Hashimoto of the Computer Science Department said: "We think the interesting work is in developing methods on top of Alpaca [since the dataset itself is just a combination of known ideas], so we don't have current plans along the lines of making more datasets of the same kind or scaling up the model."
Alpaca was developed on Meta AI’s LLaMA 7B model and generated training data with the method known as self-instruct.
Adjunct professor Douwe Kiela noted that “As soon as the LLaMA model came out, the race was on.”
Kiela, who previously worked as an AI researcher at Facebook, said: "Somebody was going to be the first to instruction-finetune the model, and so the Alpaca team was the first … and that's one of the reasons it kind of went viral."
“It’s a really, really cool, simple idea, and they executed really well.”
Hashimoto said that the “LLaMA base model is trained to predict the next word on internet data and that instruction-finetuning modifies the model to prefer completions that follow instructions over those that do not.”
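The self-instruct approach mentioned above can be sketched very roughly as follows. This is an illustrative toy, not the Alpaca team's actual pipeline: `query_model`, the prompt format, and the `|||` separator are all invented placeholders, and a real implementation would call a strong "teacher" LLM's API where the stub sits.

```python
# Minimal, illustrative sketch of a self-instruct-style data-generation loop:
# a small set of seed tasks is used to prompt a teacher model, and the model's
# new (instruction, output) pairs are folded back into the growing dataset.

SEED_TASKS = [
    {"instruction": "Translate 'hello' to French.", "output": "bonjour"},
    {"instruction": "Give an antonym of 'hot'.", "output": "cold"},
]

def query_model(prompt: str) -> str:
    # Stub: a real implementation would call the teacher LLM's API here.
    return "Name a primary colour. ||| red"

def self_instruct(seed_tasks, rounds: int = 3):
    """Bootstrap new (instruction, output) pairs from a handful of seeds."""
    dataset = list(seed_tasks)
    for _ in range(rounds):
        # Show the teacher a few recent instructions and ask for a new task.
        examples = "\n".join(t["instruction"] for t in dataset[-3:])
        raw = query_model(
            f"Here are some tasks:\n{examples}\n"
            "Write a new task and its answer, separated by '|||'."
        )
        instruction, _, output = raw.partition("|||")
        pair = {"instruction": instruction.strip(), "output": output.strip()}
        if pair not in dataset:  # crude de-duplication
            dataset.append(pair)
    return dataset

data = self_instruct(SEED_TASKS)
```

The resulting instruction/output pairs are what instruction finetuning then trains on, nudging the base model to prefer completions that follow instructions.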
The source code of Alpaca is available on GitHub, a source code sharing platform, where it has been viewed 17,500 times. More than 2,400 people have used the code for their own models.
“I think much of the observed performance of Alpaca comes from LLaMA, and so the base language model is still a key bottleneck,” Hashimoto stated.
As the use of artificial intelligence systems grows by the day, scientists and experts have been debating whether companies should publish their source code, training data and training methods, and how transparent the technology should be overall.
Kiela said: "I think one of the safest ways to move forward with this technology is to make sure that it is not in too few hands."
“We need to have places like Stanford, doing cutting-edge research on these large language models in the open. So I thought it was very encouraging that Stanford is still actually one of the big players in this large language model space,” Kiela noted.
Play it once: WhatsApp rolls out one-time viewing feature for voice notes
Meta-owned WhatsApp claims to be the most secure medium for online conversations and keeps bringing improvements to support this claim.
One of the latest features WhatsApp has rolled out for safer conversations is the “view once” option for voice messages.
After introducing the “view once” feature for pictures to protect photo sharing from misuse, the instant messenger has made the same feature applicable to voice notes.
"Say it once, play it once. Now you can select 'view once' when sending a voice note for an added layer of protection," WhatsApp announced on its official account on X.
The post contained a slideshow video explaining the feature.
Using the "view once" option for voice notes, users will have choice and control over anything they share.
They will be able to share what they want "privately", as the receiver will no longer be able to save, forward, or even listen to a voice note more than once.
The new feature will also help users share "sensitive information" safely, giving them peace of mind that it cannot be saved or forwarded to a third party.
Zindagi Trust gets featured on Meta website for transforming Pakistan’s education system
KARACHI: In Pakistan, where a staggering 28 million-plus children are out of school and the education infrastructure suffers widely, Zindagi Trust, a non-profit organisation, is dedicated to revolutionising the education system.
Founded in 2003 by famous Pakistani singer Shehzad Roy, the trust works on the mission to provide quality education to underprivileged children and reform government schools in Pakistan, through pilot projects at model schools and advocacy with the government.
For its success in reaching and engaging supporters as an early adopter of WhatsApp Channels, Zindagi Trust has been featured on Meta's website as a case study for governments and charities.
The Trust is notably the first non-profit organisation from Pakistan to receive this recognition.
Capitalising on the popularity of the Meta-owned messaging app WhatsApp, Zindagi Trust set out to reach new audiences, raise awareness, and facilitate fundraising.
It launched a WhatsApp Channel highlighting initiatives that extend beyond its model schools to government schools nationwide.
Zindagi Trust saw a significant surge in followers, a 7% increase in donations, and increased reach across its social ecosystem.
Speaking to Geo.tv, Zindagi Trust’s Senior Marketing & Resource Development Manager Faiq Ahmed said that WhatsApp channels have significantly contributed to the realisation of Zindagi Trust’s objectives by establishing a direct and interactive platform for communication with education and child protection enthusiasts.
Talking about collaboration with the government sector, Faiq said that their advocacy initiatives with the government’s help have left an indelible mark on Pakistan, catalysing groundbreaking changes nationwide.
“Through collaboration and perseverance, we continue to shape a brighter future for the children of Pakistan, not only in the education sector but also in areas vital to the well-being of our society,” he added.
Facebook and Instagram full of predators for children, alleges lawsuit
Meta's social media platforms Facebook and Instagram have become fertile ground for child predators and paedophiles, New Mexico Attorney General Raul Torrez alleged in a lawsuit.
Torrez's office conducted its investigation using fake accounts and discovered that accounts posing as minors were sent "solicitations" and explicit content.
The lawsuit seeks court-ordered changes to protect minors, asserting that Meta has neglected voluntary actions to address these issues effectively.
In its response, Meta defended its efforts to root out predators. However, New Mexico's investigation found a higher prevalence of exploitative material on Facebook and Instagram than on adult content platforms.
Attorney General Torrez underscored the platforms’ unsafe nature for children, describing them as hotspots for predators to engage in illicit activities.
While US law shields platforms from content liability, the lawsuit argues that Meta’s algorithms actively promote sexually exploitative material, transforming the platforms into a marketplace for child predators.
The lawsuit accuses Meta of misleading users about platform safety, violating laws prohibiting deceptive practices, and creating an unsafe product.
Moreover, the lawsuit targets Facebook founder Mark Zuckerberg personally, alleging that while he publicly championed child safety, he steered the company in the opposite direction.
In response, Meta reiterated its commitment to combating child exploitation, emphasizing its use of technology and collaborations with law enforcement to address these concerns.