AI News Essentials

OpenAI faces complaint over fictional outputs

Artificial intelligence (AI) company OpenAI is facing scrutiny over the accuracy and reliability of its AI chatbot, ChatGPT. On April 29, 2024, European data protection advocacy group noyb filed a complaint against OpenAI, alleging that the chatbot generates fictional outputs and fails to comply with the General Data Protection Regulation (GDPR) in the European Union.

The complaint highlights a fundamental issue with ChatGPT's output: its potential to invent false information. Maartje de Graaf, a data protection lawyer at noyb, stated that 'making up false information is quite problematic', especially when it involves individuals' personal data. The group cited an example in which ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

Under the GDPR, individuals have the right to rectification and access regarding their personal data. However, OpenAI has admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of its training data. This inability to ensure data accuracy and transparency has led to the complaint, which calls for an investigation into OpenAI's data processing practices and measures to ensure compliance with the GDPR.

The issue of AI models generating false information is not new, with a New York Times report finding that chatbots like ChatGPT 'invent information' up to 27 percent of the time. European privacy watchdogs have already taken action, with the Italian Data Protection Authority restricting ChatGPT in March 2023 and the European Data Protection Board establishing a task force.

In its defense, OpenAI has argued that 'factual accuracy in large language models remains an area of active research'. However, the potential consequences of false information, especially when it involves individuals, are serious. This complaint against OpenAI underscores the ongoing challenges and ethical considerations in the development and deployment of AI models, particularly in ensuring data accuracy and transparency.

Published on: May 3, 2024

Source: Artificial Intelligence News

AI is Making Online Casinos Safer Than Ever

The integration of AI in online casinos is making virtual gambling safer and more secure than ever before. AI algorithms can analyze player behavior, detect patterns, and identify suspicious activities, thereby mitigating risks and ensuring a fair and secure gaming environment for players. This technology is a game-changer in fraud prevention, with the ability to instantly flag anomalies and protect users' financial and personal data. Beyond security, AI also enhances user experiences through personalized game recommendations, 24/7 customer support via chatbots, and immersive, realistic gaming experiences. While concerns about the ethical implications of AI in gaming exist, the technology is largely seen as a positive development, fostering trust and promoting responsible gambling practices. As AI continues to evolve, online casinos are expected to become even more secure and tailored to individual players' needs and preferences.
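
The kind of behavioural fraud detection described above is typically built on unsupervised anomaly detection. The sketch below is a minimal illustration using scikit-learn's IsolationForest; the feature set (bet size, session length, deposits per day) and all numbers are hypothetical assumptions for the example, not details from any actual casino platform.

```python
# Illustrative sketch: flagging unusual player behaviour with an unsupervised
# anomaly detector. Features and values are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" sessions: [bet_size, session_minutes, deposits_per_day]
normal_sessions = rng.normal(loc=[20, 45, 1], scale=[5, 15, 0.5], size=(1000, 3))

# A few simulated suspicious sessions (very large bets, rapid deposits)
suspicious_sessions = np.array([[500.0, 10.0, 12.0], [350.0, 5.0, 9.0]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# predict() returns -1 for anomalies and 1 for inliers
flags = model.predict(suspicious_sessions)
print(flags)  # both sessions are expected to be flagged (-1) for human review
```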

Published on: May 3, 2024

Source: Artificial Intelligence News

FT Partners with OpenAI Amid Web Scraping Criticism

The Financial Times (FT) has entered into a strategic partnership with OpenAI, licensing its content for AI model development and training. The move comes despite ongoing criticism of OpenAI's web scraping practices, with the company accused of using unlicensed content to train its AI chatbot, ChatGPT. The FT joins a growing list of media companies that have partnered with OpenAI, including global news publisher Axel Springer, France's Le Monde, and Spain-based Prisa Media.

Through this partnership, ChatGPT users will be able to access select attributed summaries, quotes, and links to FT journalism in response to their queries. FT Group CEO John Ridding emphasized the value of the FT's journalism and the importance of AI platforms compensating publishers for their content. OpenAI COO Brad Lightcap shared enthusiasm for the partnership, stating that it will empower news organizations and journalists while enriching the ChatGPT experience for users worldwide.

OpenAI has faced legal challenges, including a lawsuit by the New York Times alleging the unauthorized use of its articles to train the chatbot. The opt-out approach of OpenAI's web crawler has also raised concerns among technology ethicists. Despite this, ChatGPT remains the most widely used AI chatbot globally, and the company's new web crawler may further advance its models' capabilities.

Published on: May 3, 2024

Source: Artificial Intelligence News

Coalition of News Publishers Sues Microsoft and OpenAI

In a recent development in the ongoing debate over the use of online data for artificial intelligence, a coalition of news publishers has taken legal action against tech giants Microsoft and OpenAI. On May 1, 2024, eight prominent US newspaper publishers, including The New York Daily News, The Chicago Tribune, and The Denver Post, jointly filed a lawsuit in a New York federal court, accusing the companies of copyright infringement. The suit claims that OpenAI and Microsoft's generative AI products, ChatGPT and Microsoft Copilot, have been trained on and fed millions of copyrighted news articles without permission or proper compensation. This practice, the publishers argue, has negatively impacted their revenue streams and subscription-based business models, while also tarnishing their reputation through false attributions and inaccurate information.

According to the complaint, ChatGPT and Copilot have reproduced and regurgitated significant portions of their articles, often without providing prominent links back to the original sources. This has reduced the incentive for readers to subscribe to local newspapers, as they can access the full text of articles through the chatbots. The lawsuit seeks a jury trial and compensation for the use of the publishers' content, but does not specify a monetary amount. While Microsoft has declined to comment, OpenAI has stated that it takes great care in supporting news organizations and is engaged in partnerships to explore opportunities.

The lawsuit adds to the growing legal challenges faced by OpenAI and Microsoft, with The New York Times having filed a similar suit in December 2023. The outcome of these cases will have significant implications for the news industry and how AI companies utilize copyrighted content in the future, potentially reshaping the way news companies are compensated in the AI era.

Published on: May 3, 2024

Source: Artificial Intelligence News

AI Networks Are More Vulnerable to Malicious Attacks Than Previously Thought

A recent study reveals that AI networks are far more susceptible to malicious attacks than previously thought, raising concerns about their security and the potential risks to human lives. AI tools have been touted for their potential in various sectors, from autonomous vehicles to medical image interpretation. However, researchers at North Carolina State University have uncovered a disturbing vulnerability in these systems. The study, co-authored by Tianfu Wu, an associate professor of electrical and computer engineering, found that AI tools are highly vulnerable to targeted attacks designed to manipulate their decision-making.

Adversarial attacks, as they are known, involve manipulating the data fed into an AI system to confuse and deceive it. For instance, a specific sticker on a stop sign could render it invisible to an AI system, or altered X-ray data could lead to incorrect diagnoses.

The study focused on understanding the prevalence of these vulnerabilities in AI deep neural networks and found that they were far more common than expected. Attackers can exploit these vulnerabilities to make the AI interpret data as they wish, which raises serious safety concerns.

To address this issue, the researchers developed QuadAttacK, a software tool to identify adversarial vulnerabilities in any deep neural network. They were surprised to find that widely used networks like ResNet-50 and DenseNet-121 were highly susceptible to attacks.
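
QuadAttacK itself is not reproduced here, but the general mechanics of an adversarial attack can be sketched in a few lines. The example below is an assumed, minimal illustration of the classic fast gradient sign method (FGSM) applied to a pretrained torchvision ResNet-50: a small, gradient-guided perturbation is added to the input so the model's prediction can change, which is the class of vulnerability the study probes.

```python
# Minimal FGSM sketch against a pretrained ResNet-50 (illustration only; this
# is not the QuadAttacK tool described in the study).
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Stand-in for a real, preprocessed image, plus an arbitrary ImageNet label
image = torch.rand(1, 3, 224, 224, requires_grad=True)
label = torch.tensor([208])

# Compute the loss and backpropagate to get the gradient w.r.t. the input
loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.01                                     # small perturbation budget
adversarial = image + epsilon * image.grad.sign()  # nudge pixels against the model
adversarial = adversarial.clamp(0, 1).detach()

# The perturbed input frequently yields a different prediction
print(model(image).argmax(1), model(adversarial).argmax(1))
```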

The researchers have made QuadAttacK publicly available to aid the AI community in testing neural networks. While they are working on solutions to minimize vulnerabilities, the findings underscore the urgent need for improved AI security measures to protect against potential threats.

Published on: May 1, 2024

Source: Science Daily

AI Predicted to Address Labor Shortages and Boost Efficiency

Lately, there has been a growing emphasis on AI as a potential solution to labor shortages faced by numerous industries. The COVID-19 pandemic, economic shifts, and technological advancements have all contributed to a complex landscape of challenges and opportunities. AI is now at the forefront of discussions about the future of work, with predictions and expectations for its impact on various sectors.

The labor shortage, exacerbated by the pandemic, has resulted in a unique scenario where there are both vacant jobs and layoffs occurring simultaneously. This paradoxical situation has prompted companies to explore innovative ways to adapt and survive.

One of the primary strategies being adopted is the utilization of AI and automation to improve operational efficiency and reduce costs. By deploying AI tools, companies aim to streamline repetitive and time-consuming tasks, freeing up human resources for more value-adding responsibilities. This approach not only helps address labor shortages but also improves overall productivity and efficiency.

AI is particularly beneficial in sectors such as manufacturing, healthcare, logistics, and hospitality, where skilled labor shortages have been acute. For example, AI-based chatbots in customer service can enhance the customer experience and reduce the workload for human agents. Similarly, in manufacturing, AI can be used for predictive analytics, proactive equipment maintenance, and quality control, reducing the need for manual labor and improving output.
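
As a rough illustration of the predictive-maintenance use case mentioned above, the sketch below trains a classifier on simulated sensor readings to flag machines likely to need servicing. The features (temperature, vibration, runtime hours) and the data are hypothetical placeholders, not drawn from the article or any real production line.

```python
# Hedged sketch of predictive maintenance: learn from past sensor readings
# which machines are likely to need servicing. All data is simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Simulated history: healthy machines run cooler and vibrate less
temperature = rng.normal(70, 10, n)
vibration = rng.normal(0.3, 0.1, n)
runtime_hours = rng.uniform(0, 5000, n)
needs_service = ((temperature > 85) | (vibration > 0.5)).astype(int)

X = np.column_stack([temperature, vibration, runtime_hours])
X_train, X_test, y_train, y_test = train_test_split(X, needs_service, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # machines flagged here get inspected proactively
```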

Additionally, AI is playing a crucial role in talent retention and employee satisfaction. By automating tedious tasks, companies demonstrate respect for their employees' time and well-being, leading to higher satisfaction and retention rates. Furthermore, AI tools can be leveraged to create dynamic schedules for remote workers, ensuring productivity while also accommodating their personal lives.

AI is also creating new job opportunities, particularly in fields related to data analysis, machine learning, digital marketing, and process automation. As AI continues to evolve and advance, the demand for skilled professionals in these areas is expected to grow.

However, it is important to acknowledge the potential drawbacks of over-reliance on AI and automation. These include the lack of a personal touch, increased exposure to cyber threats, and the exacerbation of the existing skills gap. As such, finding the right balance between leveraging advanced technologies and ensuring a skilled workforce remains a critical challenge for organizations.

In conclusion, while AI has the potential to address labor shortages and improve efficiency, it also presents ethical and economic considerations that must be carefully navigated. As we move forward into an era of increasing automation, finding this balance will be essential to ensure that the benefits of AI are shared by all stakeholders, including businesses, employees, and society at large.

Published on: April 30, 2024

Source: Tech Insider

Alphabet hails AI as Meta loses billions

In a recent turn of events, Meta has lost almost $200 billion in value after Mark Zuckerberg's announcement regarding investments in AI, sparking fears among investors about whether the company's huge bets on artificial intelligence will pay off. On the other hand, Alphabet, the parent company of Google, has reported a 15% rise in revenue, reaching $80.5 billion, a surge attributed to its focus on AI opportunities; the company has also issued its first-ever dividend. Sundar Pichai, CEO of Alphabet, hailed the transition to AI as a 'once-in-a-generation opportunity' as the company races to integrate AI across its business. While Meta's value plummeted, shares in Alphabet surged, highlighting the contrasting fortunes of these tech giants.

Published on: April 28, 2024

Source: The Guardian

AI is expected to transform the film industry

AI is already transforming the film industry, with its use in production pipelines sparking excitement and controversy in equal measure. The industry is in a period of significant change as AI makes inroads into various aspects of filmmaking, and reactions range from enthusiasm about the new possibilities the technology offers to concern about its potential impact on jobs and creativity.

One of the most notable examples of AI in recent film releases is the use of de-aging technology on Harrison Ford in 'Indiana Jones and the Dial of Destiny'. This has sparked debates and discussions about the future of AI in cinema, with some praising the technology as impressive and others finding it uncanny and unsettling.

The de-aging process involved using AI to comb through decades of old footage of Ford, allowing the filmmakers to recreate his younger appearance from the 1980s. While some viewers found the effect impressive, others criticized it as distracting and unnatural, particularly when combined with Ford's older voice.

The use of AI in film has become an increasingly contentious issue, with strikes by actors and writers' unions highlighting concerns about its potential impact on jobs and creativity. Many in the industry feel threatened by the rapid advancements in AI technology, including deepfake and generative AI tools, which have the potential to disrupt traditional filmmaking roles.

However, supporters of AI in film argue that it can democratize filmmaking by making it more accessible, efficient, and cost-effective. They believe that AI can enhance human creativity and productivity, particularly in areas like scriptwriting, pre-production planning, and special effects.

As AI technology continues to evolve and improve, the film industry is at a crossroads. While some filmmakers embrace the new possibilities offered by AI, others remain cautious or skeptical, highlighting the importance of regulating the technology to protect jobs and creativity. The debate around AI in film is likely to continue as the technology advances, shaping the future of filmmaking in ways that are yet to be fully understood.

Published on: April 28, 2024

Source: MIT Technology Review

AI Could Widen Inequality, Warns White House Report

In a recent report, the White House has raised concerns about the potential impact of AI on US workers, particularly those with less education and lower incomes. The report, shared first with CNN, estimates that about 10% of US workers are in jobs that face the highest risk of disruption from rapidly evolving AI. This highlights the risk of AI amplifying existing inequalities, with lower-income and less-educated workers being especially vulnerable to displacement.

The findings are part of the Council of Economic Advisers' annual Economic Report of the President, which dedicates a chapter to AI and its potential effects on the workforce. Jared Bernstein, chair of the council, draws a parallel to health risks, asking who is most at risk and how they can be protected. The report reflects the White House's proactive approach to guiding AI development to benefit workers.

While AI has the potential to complement some jobs, it may also displace others. The report acknowledges the evolving nature of AI and includes caveats about the uncertainty of its future impact. Generative AI, for example, is already capable of tasks previously unique to humans, such as humorous writing and image creation. The report also found that 20% of workers are in high-exposure AI occupations, a finding in line with 2022 results from the Pew Research Center.

Despite the risks, the White House remains committed to implementing policies that mitigate the potential negative consequences of AI on workers' lives. The Biden administration aims to reduce the risk of AI-induced job displacement and prevent technological advancements from solely dictating societal inequality.

The White House report underscores the complex nature of the issue, recognizing the potential benefits and drawbacks of AI for the US workforce. As AI continues to evolve, policymakers must carefully consider their strategies to ensure a positive future for all.

Published on: April 28, 2024

Source: CNN Business

Beijing to subsidize domestic AI chips, targeting self-reliance by 2027

Beijing city authorities have announced subsidies for companies that purchase domestically produced artificial intelligence (AI) chips, as part of a drive to develop China's semiconductor industry and reduce its reliance on foreign technology. The initiative, outlined in a document by the Beijing Municipal Bureau of Economy and Information Technology dated April 24, did not specify the size of the subsidies. However, it stated that companies investing in domestically controlled GPU chips for intelligent computing services will receive support based on a certain percentage of their investment. China is cultivating its own AI chip industry, with Huawei Technologies' Ascend 910 chips seen as a potential rival to products from US-based Nvidia. This move comes as the US tightens restrictions on exporting advanced computing products to China, citing national security concerns. Beijing's initiative targets 100% self-reliance in smart computing infrastructure hardware and software by 2027, with government-related entities known as 'intelligent computing centers' being the major buyers of domestic AI chips so far.

Published on: April 27, 2024

Source: Reuters