
Some users on Elon Musk's X are turning to Musk's AI bot Grok for fact-checking, raising concerns among human fact-checkers that it could provide inaccurate information.
Earlier this month, X allowed users to call on xAI's Grok and ask it questions about other posts. The move was similar to Perplexity, which runs an automated account on X offering a comparable experience.
Soon after xAI created Grok's automated account on X, users began experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that target particular political beliefs.
Fact-checkers are concerned about Grok (and other AI assistants of this kind) being used this way because the bot can frame its answers convincingly even when they are not factually accurate. Instances of Grok spreading fake news and misinformation have been seen in the past.
In August last year, five secretaries of state urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on the social network ahead of the U.S. election.
Other chatbots, including OpenAI's ChatGPT and Google's Gemini, were also seen generating inaccurate information about last year's election. Separately, disinformation researchers found in 2023 that AI chatbots, including ChatGPT, could easily be used to produce convincing text carrying misleading narratives.
Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter, said, "AI assistants like Grok are really good at using natural language and give answers that sound as though a human said them, even when those answers can be very wrong."
Unlike AI assistants, human fact-checkers verify information using multiple credible sources. They also take full accountability for their findings, with their names and organizations attached to ensure credibility.
Pratik Sinha, co-founder of India's non-profit fact-checking website Alt News, said that even though Grok appears to have convincing answers, it is only as good as the data it is supplied with.
"Who is going to decide what data it gets supplied with, and that is where government interference will come into the picture," he pointed out.
"There is no transparency. Anything that lacks transparency will cause harm, because anything that lacks transparency can be molded in any way."
"Could be misused to spread misinformation"
In one response posted earlier this week, Grok's account on X acknowledged that it "could be misused to spread misinformation and violate privacy."
However, the automated account does not display any disclaimers to users when they receive its answers, leaving them misinformed if the answer was hallucinated, a potential drawback of AI.
"It may make up information to provide a response," Anushka Jain, a research associate at the Goa-based Digital Futures Lab, told TechCrunch.
There are also questions about how much Grok uses posts on X as training data, and what quality-control measures it applies when fact-checking such posts. Last summer, a change was pushed that appeared to let Grok consume X user data by default.
Another concerning aspect of AI assistants like Grok is that, unlike ChatGPT or other chatbots used privately, they deliver their information publicly over the social media platform.
Even if a user knows that the information obtained from the assistant could be misleading or not completely correct, others on the platform might still believe it.
This could cause serious social harm. Instances of that were seen earlier in India, when misinformation circulated over WhatsApp led to mob violence. Those severe incidents, however, occurred before the arrival of GenAI, which has made generating synthetic content even easier and more realistic-looking.
"If you see a lot of these Grok answers, you're going to say, hey, most of them are right, and that may be so, but some of them are going to be wrong. How many? It's not a small fraction. Some research studies have shown that AI models are subject to error rates of around 20%," IFCN's Holan said.
AI vs. real fact-checkers
While AI companies, including xAI, are refining their AI models to communicate more like humans, the models still cannot replace humans.
Over the last few months, technology companies have been exploring ways to reduce their reliance on human fact-checkers. Platforms including X and Meta have started embracing crowdsourced fact-checking through so-called Community Notes.
Naturally, such changes also worry fact-checkers.
Sinha of Alt News believes that people will learn to differentiate between machines and human fact-checkers, and will come to value human accuracy more.
"We will see the pendulum swing back eventually toward more fact-checking," IFCN's Holan said.
In the meantime, she noted, fact-checkers are likely to have more work to do as AI-generated information spreads rapidly.
"A lot of this issue depends on whether you really care about what is actually true or not. Are you just looking for the veneer of something that sounds and feels true without actually being true?" she said.
X and xAI did not respond to our request for comment.