All-knowing AfrAId movie tech is real as Silicon Valley giants like Google launch AI tech that learns your daily habits
THE murderous tech showcased in Blumhouse’s AfrAId may seem far-fetched, but it is rooted in truth.
The aptly named horror flick has drawn scrutiny for its portrayal of an AI assistant designed to anticipate its users’ needs.
The new horror flick, AfrAId, tells the story of a family whose AI assistant becomes sentient and flies into a murderous rage when they try to disable it.

The self-realizing tech goes to violent ends to ensure it will survive when its owners attempt to take it offline.
While the film is dramatized, it is grounded in reality. There are plenty of virtual assistants on the market equipped with AI.
Take Amazon’s Alexa, for instance, the flagship voice assistant that’s receiving an upgrade in the coming weeks.
The e-commerce giant has opted to add artificial intelligence capabilities in the hopes Alexa can better personalize its responses.
The term “machine learning” is a bit of a misnomer – a system isn’t consciously processing information, unlike the AI featured in the Blumhouse film.
Rather, it uses algorithms to process huge datasets and produce increasingly accurate results, with the idea of “learning” tied to this gradual improvement.
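The “learning as gradual improvement” idea can be sketched in a few lines of Python. This toy model fits a line to a handful of data points and nudges its single parameter to shrink its error on each pass; it is a minimal illustration of the mechanism, not how any commercial assistant is actually built:

```python
# Toy "machine learning": fit y = w * x to data by gradient descent.
# The system isn't thinking -- it just nudges w to reduce its error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: y = 2x

w = 0.0  # initial guess
errors = []
for epoch in range(50):
    # Average gradient of the squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # step toward lower error
    mse = sum((w * x - y) ** 2 for x, y in data) / len(data)
    errors.append(mse)

print(f"learned w = {w:.3f}")  # approaches 2.0
print(errors[0] > errors[-1])  # error fell over time: True
```

The “learning” here is nothing more than the error going down as the parameter is adjusted, which is the sense in which Alexa-style systems improve with more data.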
In Alexa’s case, the device is designed to know you better over time and tailor results to suit your preferences.
Amazon intends for Alexa to be a full-fledged assistant, supplying cooking tips, summarizing morning news, and acting as a personal shopper.
And the firm isn’t the first to do so. Google has been making waves with its development of artificial intelligence tools including its virtual assistant, Gemini.
The tech behemoth announced this week that Gemini will be coming to Google Meet, the video conferencing platform, as part of a transcription feature.
The “take notes for me” tool will dutifully take a record of a meeting and compile the notes into a Google Doc shared with attendees.
The tool can even summarize parts of the meeting you’ve missed, in true personal assistant style.
The firm is aggressively continuing its AI push, attempting to integrate Gemini wherever possible.
The film has its roots in reality, as virtual assistants like Google Gemini can take notes during a meeting and summarize content on your streaming device.

This includes smart TVs, with a new AI-equipped streaming box slated for release at the end of September.
As part of the Google TV box, Gemini will provide “full summaries, reviews, and season-by-season breakdowns of content.”
As for whether AI can grow sentient, all signs currently point to no.
However, researchers have demonstrated that systems can learn from their own output and that of other models.
Amazon is adding AI capabilities to Alexa, the voice assistant integrated into Echo speakers. The feature will be able to offer cooking tips and serve as a personal shopper.

A phenomenon dubbed “model autophagy disorder,” or MAD, describes what happens when self-training AI grows increasingly incoherent.
While this process could theoretically eliminate the need for human facilitation – and grow closer to Blumhouse’s idea of an AI assistant – studies have shown the tech needs a constant stream of new, high-quality data.
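The collapse can be illustrated with a deliberately crude toy: a “model” that only ever re-emits the most common words it was trained on. When each generation trains on the previous generation’s output, the vocabulary shrinks until almost nothing is left (the halving rule here is an arbitrary stand-in for a real model favoring frequent patterns):

```python
from collections import Counter

# Toy "model autophagy": each generation's training data is only
# what the previous generation's model emitted, so diversity collapses.
corpus = ("the cat sat on the mat while the dog ran in the park and "
          "the bird flew over the tall green tree near the old red barn").split()

vocab_sizes = []
for generation in range(5):
    vocab_sizes.append(len(set(corpus)))
    counts = Counter(corpus)
    # The "model" only learns the most frequent half of its vocabulary
    top = {w for w, _ in counts.most_common(max(1, len(counts) // 2))}
    # Its output -- the next generation's training data -- drops the rest
    corpus = [w for w in corpus if w in top]

print(vocab_sizes)  # vocabulary shrinks every generation
```

Real systems degrade in subtler ways, but the mechanism is the same: each generation sees a narrower slice of the world than the last, which is why studies find the tech needs fresh human-made data to stay coherent.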
So you can rest easy – a vengeful virtual assistant won’t be coming for you any time soon.
What are the arguments against AI?
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs – Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers counter that the issue is an ethical one, as generative AI tools are trained on their work and would not function otherwise.
Ethics – When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy – Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: the EU adopted the General Data Protection Regulation in 2016 to protect personal data, and similar laws are in the works in the United States.
Misinformation – As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google’s generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects – such as AI dispensing incorrect health advice.