- AI-based books that you must read to learn about AI's impact on society
Artificial intelligence (AI) refers to code and technologies that perform complex tasks, an area that encompasses simulation, data processing and analytics. AI has grown steadily in importance, becoming a game changer in many industries, including healthcare, education and finance. AI has been shown to markedly improve effectiveness, efficiency and accuracy in many processes, and to reduce costs across different market sectors. AI’s impact is being felt across the globe, so it is important that we understand its effects on society and our daily lives.

A better understanding of AI, and of all that it does and can mean, can be gained from well-researched AI books. Books on AI provide insights into the use and applications of the technology. They describe the advancement of AI since its inception and how it has shaped society so far. In this article, we will be looking at AI books that focus on the societal implications. For those who don’t have time to read entire books, book summary apps like Headway will be of help.

“Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom

Nick Bostrom is a Swedish philosopher with a background in computational neuroscience, logic and AI safety. In Superintelligence, he discusses how AI could surpass our current definitions of intelligence and the possibilities that might ensue. Bostrom also covers the possible risks to humanity if superintelligence is not managed properly, arguing that AI could easily become a threat to the entire human race if we exercise no control over the technology. He offers strategies that might curb existential risks, discusses how AI can be aligned with human values to reduce those risks, and suggests teaching AI human values. Superintelligence is recommended for anyone interested in understanding the implications of AI for humanity’s future.

“AI Superpowers: China, Silicon Valley, and the New World Order” by Kai-Fu Lee

AI expert Kai-Fu Lee’s book examines the AI revolution and its impact so far, focusing on China and the USA. He concentrates on the competition between these two countries in AI and each one’s contributions to the advancement of the technology. He highlights China’s advantage, thanks in part to its larger population, and discusses China’s significant investment in AI and its chances of becoming a global leader in the field. Lee believes that cooperation between the two countries will help shape the future of global power dynamics and, with it, the economic development of the world. He argues that AI can transform economies by creating new job opportunities, with massive impact on all sectors. If you are interested in the geopolitical and economic impacts of AI, this is one of the best books out there.

“Life 3.0: Being Human in the Age of Artificial Intelligence” by Max Tegmark

Max Tegmark’s Life 3.0 explores what it means for humans to live in a world heavily influenced by AI. In the book, he describes Life 3.0 as a future in which human existence and society are shaped by AI, touching on many aspects of humanity, including identity and creativity. Tegmark envisions a time when AI can reshape human existence, and he emphasises the need to follow ethical principles to ensure the safety and preservation of human life.

Life 3.0 is a thought-provoking book that challenges readers to think deeply about the choices humanity may face as we progress into the AI era. It is one of the best books to read if you are interested in the ethical and philosophical discussions surrounding AI.

“The Fourth Industrial Revolution” by Klaus Schwab

Klaus Martin Schwab is a German economist, mechanical engineer and founder of the World Economic Forum (WEF). He argues that machines are becoming smarter with every advance in technology, supporting his arguments with evidence from previous revolutions in thinking and industry. He explains that the current age – the fourth industrial revolution – is building on the third, with far-reaching consequences. He holds that AI is crucial to technological advancement and that cybernetics will shape the technological advances coming down the line towards us all. This book is perfect if you are interested in AI-driven advancements in digital and technological growth, and it will leave you with a better understanding of the role AI will play in the next phases of technological progress.

“Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy” by Cathy O’Neil

Cathy O’Neil’s book highlights the harm that defective mathematical algorithms cause when used to judge human behaviour and character, and how their continued use produces harmful results and entrenches inequality. One example given in the book is research demonstrating that search engine results can bias voting choices. Similar scrutiny is applied to research on Facebook, which showed that political preferences could be influenced by what appears in users’ newsfeeds. This book is best suited to readers who want to explore the darker sides of AI that rarely appear in mainstream news outlets.

“The Age of Em: Work, Love, and Life when Robots Rule the Earth” by Robin Hanson

An associate professor of economics at George Mason University and a former researcher at the Future of Humanity Institute at Oxford University, Robin Hanson paints an imaginative picture of emulated human brains running in robots. What if humans copied, or “emulated”, their brains and emotions and gave them to robots? He argues that humans who become “Ems” (emulations) will come to dominate the future workplace because of their higher productivity. An intriguing book for fans of technology and those who love intelligent predictions of possible futures.

“Architects of Intelligence: The truth about AI from the people building it” by Martin Ford

Drawn from interviews with AI experts, this book examines the struggles and possibilities of AI-driven industry. If you want insights from the people actively shaping the field, this book is right for you!

These books all offer unique perspectives, but they point to one thing: today’s AI will have significant societal and technological impact. They give the reader glimpses into possible futures, with the effects of AI becoming more apparent over time. For better insight into all aspects of AI, these books are the boost you need to expand your knowledge. AI is advancing quickly, and these authors are some of the most respected voices in the field. Learn from the best with these choice reads.
- Walmart and Amazon using AI to drive retail transformation
Walmart and Amazon are harnessing AI to drive retail transformation with new consumer experiences and enhanced operational efficiency. According to analytics firm GlobalData, Walmart is focusing on augmented reality and AI-enhanced store management. Amazon, meanwhile, is leading advancements in customer personalisation and autonomous systems.

Kiran Raj, Practice Head of Disruptive Tech at GlobalData, notes: “Walmart and Amazon are no longer competing for market share alone. Their AI strategies are reshaping the entire retail ecosystem—from Walmart’s blend of digital and physical shopping experiences to Amazon’s operational automation.”

GlobalData’s Disruptor Intelligence Center, utilising its Technology Foresights tool, has identified the strategic focus of these retail titans based on their patent filings. Walmart has submitted over 3,000 AI-related patents, 20% of them in the last three years, indicating a swift evolution in its AI capabilities. Amazon, by contrast, boasts more than 9,000 patents, half of which were filed during the same timeframe, underpinning its leadership in AI-driven retail innovation.

AI-powered retail transformation

Walmart is deploying AI-driven solutions like in-store product recognition while making notable strides in AR applications, including virtual try-ons. The company’s progress in smart warehouses and image-based transactions signals a shift towards fully automated retail, enhancing both speed and precision in customer service.

Amazon stands out for its extensive deployment of AI in customer personalisation and autonomous systems. By harnessing technologies such as Autonomous Network Virtualisation and Automated VNF Deployment, the company is advancing its operational infrastructure and aiming to set new standards in network efficiency and data management.

Walmart’s development of intelligent voice assistants and automated store surveillance emphasises its aim to provide a seamless and secure shopping experience. Concurrently, Amazon’s progress in AI for coding and surveillance is pushing the boundaries of enterprise AI applications and enhancing security capabilities.

“Walmart and Amazon’s aggressive innovation strategies not only strengthen their market positions but also set a blueprint for the future of the retail sector,” Raj explains. “As these two giants continue to push the boundaries of retail AI, the broader industry can expect ripple effects in supply chain innovation, customer loyalty programmes, and operational scalability—setting the stage for a new era of consumer engagement.”
- Can AI bots be used to fraudulently boost music streams?
A singer from the United States has been accused of manipulating music streaming platforms using AI bots to fraudulently inflate his stream statistics and earn millions of dollars in royalties. Michael Smith, 52, from North Carolina, faces charges of wire fraud, conspiracy to commit wire fraud and money laundering. According to the BBC, authorities allege that this is the first time AI has been used to enable a streaming scam on such a scale.

U.S. Attorney Damian Williams emphasised the scope of the fraud, claiming that Smith took millions of dollars in royalties that should have gone to real musicians, songwriters and rights holders. The accusations stem from an unsealed indictment alleging that Smith distributed hundreds of thousands of AI-generated songs across multiple streaming platforms. To avoid detection, automated bots streamed the tracks—sometimes up to 10,000 at a time. Smith allegedly earned more than $10 million in illegal royalties over several years.

The FBI played a crucial role in the investigation. The agency’s acting assistant director, Christie M. Curtis, said the agency was dedicated to tracking down those who misuse technology to rob people of their earnings while undermining the efforts of real artists.

According to the indictment, Smith began working with the CEO of an undisclosed AI music firm around 2018. This co-conspirator allegedly provided Smith with thousands of AI-generated tracks each month. In exchange, Smith supplied metadata such as song titles and artist names, and offered a share of the streaming earnings. An email exchange between Smith and the unnamed CEO in March 2019 shows how the plot took shape. The executive wrote: “Keep in mind what we’re doing musically here…this is not ‘music,’ [but] ‘instant music.’” The email underlines the operation’s intentional nature, as well as the use of AI to generate large amounts of content with minimal effort. According to the indictment, the technology improved over time, making it harder for streaming platforms to detect fraudulent streams. In another email, dated February, Smith boasted that his AI-generated tracks had accumulated over 4 billion streams and $12 million in royalties since 2019. If convicted, Smith faces significant prison time for the charges brought against him.

The Smith case is not the only one involving bogus music streaming royalties. Earlier this year, a Danish man received an 18-month sentence for a similar scheme. Music streaming platforms like Spotify, Apple Music and YouTube forbid the use of bots and artificial streams to boost royalties. Such behaviour is disruptive and illegal, and platforms have taken steps to combat it through policy changes: Spotify, for instance, charges the label or distributor when artificial streams are detected, and music earns royalties only if it meets certain criteria.

Nevertheless, the proliferation of AI-generated music continues to disrupt the music industry. Musicians and record companies fear they will lose revenue and recognition to AI tools capable of creating music, text and images. Such tools reportedly sometimes use content that musicians and other creators have posted on the internet, raising questions of copyright infringement. Tension came to a head in 2023 when a track that mimicked the voices of popular artists Drake and The Weeknd went viral, prompting streaming platforms to remove it.
Earlier this year, several high-profile musicians, including Billie Eilish, Elvis Costello and Aerosmith, signed an open letter urging the music industry to address the “predatory” use of AI to generate content.
- Alibaba Cloud unleashes over 100 open-source AI models
Alibaba Cloud has open-sourced more than 100 of its newly-launched AI models, collectively known as Qwen 2.5. The announcement was made during the company’s annual Apsara Conference.

The cloud computing arm of Alibaba Group has also unveiled a revamped full-stack infrastructure designed to meet the surging demand for robust AI computing. This new infrastructure encompasses innovative cloud products and services that enhance computing, networking, and data centre architecture, all aimed at supporting the development and wide-ranging applications of AI models.

Eddie Wu, Chairman and CEO of Alibaba Cloud Intelligence, said: “Alibaba Cloud is investing, with unprecedented intensity, in the research and development of AI technology and the building of its global infrastructure. We aim to establish an AI infrastructure of the future to serve our global customers and unlock their business potential.”

The newly-released Qwen 2.5 models range from 0.5 to 72 billion parameters in size and boast enhanced knowledge and stronger capabilities in maths and coding. Supporting over 29 languages, these models cater to a wide array of AI applications both at the edge and in the cloud, across sectors from automotive and gaming to scientific research.

Alibaba Cloud’s open-source AI models gain traction

Since its debut in April 2023, the Qwen model series has garnered significant traction, surpassing 40 million downloads across platforms such as Hugging Face and ModelScope. These models have also inspired the creation of over 50,000 derivative models on Hugging Face alone.

Jingren Zhou, CTO of Alibaba Cloud Intelligence, commented: “This initiative is set to empower developers and corporations of all sizes, enhancing their ability to leverage AI technologies and further stimulating the growth of the open-source community.”

In addition to the open-source models, Alibaba Cloud announced an upgrade to its proprietary flagship model, Qwen-Max. The enhanced version reportedly demonstrates performance on par with other state-of-the-art models in areas such as language comprehension, reasoning, mathematics, and coding.

The company has also expanded its multimodal capabilities with a new text-to-video model as part of its Tongyi Wanxiang large model family. This model can generate high-quality videos in various visual styles, from realistic scenes to 3D animation, based on Chinese and English text instructions. Furthermore, Alibaba Cloud introduced Qwen2-VL, an updated vision language model capable of comprehending videos lasting over 20 minutes and supporting video-based question-answering. The company also launched AI Developer, a Qwen-powered AI assistant designed to support programmers in automating tasks such as requirement analysis, code programming, and bug identification and fixing.

To support these AI advancements, Alibaba Cloud has announced several infrastructure upgrades, including:

- CUBE DC 5.0, a next-generation data centre architecture that increases energy and operational efficiency.
- Alibaba Cloud Open Lake, a solution to maximise data utility for generative AI applications.
- PAI AI Scheduler, a proprietary cloud-native scheduling engine for enhanced computing resource management.
- DMS: OneMeta+OneOps, a platform for unified management of metadata across multiple cloud environments.
- 9th Generation Enterprise Elastic Compute Service (ECS) instances, offering improved performance for various applications.
These updates from Alibaba Cloud – including the release of over 100 open-source models – aim to provide comprehensive support for customers and partners to maximise the benefits of the latest technology in building more efficient, sustainable, and inclusive AI applications.
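Because the Qwen 2.5 checkpoints are distributed through Hugging Face, trying one locally takes only a few lines. Below is a minimal sketch assuming the `transformers` library and the `Qwen/Qwen2.5-0.5B-Instruct` checkpoint name (the smallest of the parameter sizes mentioned above); larger sizes follow the same pattern.

```python
# Minimal sketch: run one of the open-source Qwen 2.5 models locally.
# Assumes the Hugging Face `transformers` library and the
# "Qwen/Qwen2.5-0.5B-Instruct" checkpoint id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The instruct variants expect the chat template rather than raw text.
messages = [{"role": "user", "content": "Give one use of a small on-device LLM."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```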
- Agentic AI: The New Appendage to Generative AI
Artificial intelligence is developing at a rapid rate, and just when you thought you understood generative AI, agentic AI—another revolutionary idea—arrives. This ground-breaking technology represents a paradigm change that has the potential to redefine the limits of artificial intelligence (AI) capabilities, not just another business buzzword. Let's look at why tech executives are interested in agentic AI and how it might change the way businesses operate.

Agentic AI: What does agentic mean in AI?

Fundamentally, agentic AI is the term used to describe artificial intelligence systems that have some autonomy and are capable of acting independently to accomplish particular objectives. In contrast to conventional AI models, which just carry out preprogrammed tasks or react to commands, agentic AI is capable of making decisions, organizing actions, and even learning from mistakes—all while pursuing goals specified by its human developers.

The "chaining" feature of agentic AI sets it apart from classical AI. This means it can divide complicated jobs into smaller, more manageable parts by performing a series of operations in response to a single request. For instance, an agentic AI system tasked with creating a website would independently come up with the following objectives:

- Create the screen layouts and website structure.
- Create material for every page.
- Compose the required backend, HTML, and CSS code.
- Create images and apply graphics.
- Verify responsiveness and troubleshoot any problems.

Consider agentic AI an enhanced digital assistant. It can do more than respond to your inquiries and carry out basic duties; it can take the initiative to address challenging issues and modify its strategy in response to shifting conditions. It's like having an extremely smart, industrious intern who not only does what you ask but also anticipates your requirements and comes up with original ideas you may not have thought of.

How does agentic AI work?

- Perception: The environment provides sensory data to the agent's perception module. Relevant information, such as textual content, numerical values, or visual characteristics, is extracted from this data through processing.
- Goal Representation: The agent's cognitive module specifies its objectives or goals. These may be stated explicitly, as in "navigate to the kitchen," or more implicitly, as in "maximize customer satisfaction."
- Planning: The agent's planning module creates an action plan based on the stated objectives and the current situation. This plan could consist of a series of actions or a tiered set of smaller objectives.
- Making Decisions: Taking into account its objectives, the plan, and the current circumstances, the agent's decision-making module weighs the available options and chooses the best course of action.
- Action Execution: The agent's action module carries out the chosen action. This could entail virtual actions (like sending messages or recording decisions) or physical ones (like moving or gripping).
- Learning: The agent's learning module keeps its knowledge up to date and enhances its functionality by drawing on past experiences. Reinforcement learning, supervised learning, and unsupervised learning are all possible approaches here.
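The module list above maps naturally onto a simple control loop. Below is a minimal sketch in Python; every name here (`Agent`, `perceive`, `plan`, `decide`, `act`, `learn`) is illustrative rather than taken from any real framework.

```python
# Minimal sketch of the perceive -> plan -> decide -> act -> learn loop
# described above. All names are illustrative, not from a real framework.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # learning: past action/outcome pairs

    def perceive(self, environment: dict) -> dict:
        # Perception: extract the relevant features from raw input.
        return {"state": environment.get("state", "unknown")}

    def plan(self, observation: dict) -> list:
        # Planning: break the goal into smaller steps (the "chaining" behaviour).
        return [f"step toward '{self.goal}' given state={observation['state']}"]

    def decide(self, plan: list) -> str:
        # Decision-making: pick the best next action; here, trivially the first step.
        return plan[0]

    def act(self, action: str) -> str:
        # Action execution: virtual (send a message) or physical (move an actuator).
        return f"executed: {action}"

    def learn(self, action: str, outcome: str) -> None:
        # Learning: record the outcome to inform future decisions.
        self.memory.append((action, outcome))

    def run_step(self, environment: dict) -> str:
        observation = self.perceive(environment)
        action = self.decide(self.plan(observation))
        outcome = self.act(action)
        self.learn(action, outcome)
        return outcome

agent = Agent(goal="build a one-page website")
print(agent.run_step({"state": "no site yet"}))
```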
Agentic AI Process

The transition from a single agentic AI to an agentic workflow suggests a change in focus toward leveraging AI to deliver better outcomes, demonstrating that even less sophisticated LLMs can yield remarkable benefits when incorporated into these complex, multi-tiered systems. There are a few key phases involved in setting up such a system.

Step 1: Determine the Structure and Workflow

To better understand the design of your agentic workflow and its AI assistants, ask yourself the following questions:

- What problems or tasks will the workflow address?
- How many agentic AIs are required to break the large task down into smaller, more manageable tasks?
- What are each agent's specific role and target?
- What do each agent's input and output look like?
- How do the agents communicate or work together in the workflow?

Step 2: Define and Build the Agentic Helpers

- Planning and action: Establish the agent's input and output.
- Memory: Ascertain how many previous messages need to be retrieved and whether long-term memory is required.
- Tools: Examine your agent's function to determine any necessary power-ups or extensions. For instance, if the agent is researching your sales lead, you might need a web scraper and an online search tool to get the lead's details from LinkedIn or their webpage.

Step 3: Automate with an Agentic Workflow

You might need to modify your agents' prompts, or add an extra step to reformat one agent's output, so that it fits into another agent's prompt as input. (A minimal sketch of such a chained workflow appears at the end of this article.)

Agentic AI vs Generative AI

Generative AI (GenAI) and agentic AI are two separate subfields of artificial intelligence, each with its own advantages and uses. GenAI is very good at producing new material in a variety of media, such as text, photos, music, and even code. It is skilled at coming up with creative solutions, telling engrossing stories, and brainstorming ideas. But the main purpose of generative AI is creation; it takes human input and supervision to understand the context and objectives of its output.

Agentic AI, conversely, is action-oriented: it goes beyond content generation to enable autonomous systems that can make decisions and take actions on their own. With very little assistance from humans, these systems can assess circumstances, create plans of action, and carry them out to meet predetermined objectives. They can function autonomously, adjusting to shifting conditions and learning from their experiences.

To put it simply, agentic AI concentrates on doing, whereas GenAI concentrates on generating. The output of generative AI is new content; the output of agentic AI is a sequence of decisions or actions. Combined, the two can produce powerful solutions that blend imagination and action. Once a GenAI model has produced marketing copy, for instance, an agentic AI system may autonomously distribute it to the best channels based on campaign goals and real-time data.

What are the applications of Agentic AI?

The promise of agentic AI to transform how we engage with technology and tackle challenging problems is what has many so excited about it. It's catching the interest of both IT enthusiasts and corporate executives for the following reasons:

- Enhanced Autonomy: Agentic AI systems are well suited to jobs requiring constant monitoring or quick decision-making because they can function with little assistance from humans.
- Better Problem-Solving: Agentic AI can solve complicated problems in new and effective ways by fusing goal-oriented behavior with machine learning capabilities.
- Adaptability: These systems are more resilient and efficient in dynamic conditions because they can modify their tactics in response to new information or changing environments.
- Personalization: By learning from user interactions to better meet individual demands, agentic AI can deliver highly customized experiences and solutions.
- Scalability: Once trained, agentic AI systems have the ability to change entire industries and applications overnight.
- Communication Skills: Agentic AI can understand natural language, verify expectations, discuss tasks, and show some degree of reasoning in its decision-making, making these systems simpler for humans to engage with and manage.

Agentic AI has revolutionary potential across a wide range of applications. This technology is expected to have a big influence in the following areas:

- Business Operations: Agentic AI may completely transform how companies manage their daily operations. AI agents could estimate demand, adjust inventory levels, manage supply chains independently, and even handle intricate logistical planning. By analyzing enormous volumes of data and making choices in real time, they could greatly increase operational efficiency and cut expenses.
- Healthcare: By acting as 24/7 health aides, agentic AI could transform patient care. AI agents might interact with patients regularly, keeping an eye on their physical and mental well-being, modifying treatment plans as needed, and even offering individualized therapy support. By evaluating enormous volumes of medical data, they could also anticipate health problems before they worsen, allowing for genuinely proactive healthcare.
- Software Development: Envision AI agents that oversee complete development lifecycles in addition to producing code. These agents could develop and debug code, manage quality assurance procedures, and even design system architectures on their own. This could change how we create and maintain digital products and significantly speed up software production.
- Cybersecurity: In the ever-changing world of cyberthreats, agentic AI may serve as a constant defender of network security. Without continual human supervision, AI agents could independently monitor network traffic, identify anomalies, and react to cyber threats in real time. This could greatly improve an organization's security posture and free up human experts to work on more difficult security problems.
- Human Resources: AI agents could revolutionize talent management by improving and automating a number of HR procedures. They might manage staff onboarding and continuous training in addition to performing initial candidate screens and scheduling interviews. They could also give workers tailored career-development guidance based on their abilities, output, and the requirements of the business.
- Scientific Research: By autonomously planning and carrying out experiments, evaluating data, and even generating new hypotheses, agentic AI could accelerate scientific discovery. AI agents could speed up innovation across a range of scientific fields, from drug development in pharmaceuticals to materials science in manufacturing.
- Finance: In the fast-paced world of trading and investing, agentic AI could transform portfolio management. AI agents may evaluate market patterns, make split-second trading decisions, and dynamically adjust investment strategies based on current economic data and news events. The result could be increased market efficiency and potential gains for investors.

What are the challenges that Agentic AI can face?

Even though the possibilities for agentic AI are intriguing, there are difficulties involved. It is crucial to take ethical factors into account, such as making sure these systems make choices consistent with human values. The intricacy of AI models can make it challenging to comprehend or analyze their decision-making procedures; this "black box" issue hampers accountability and trust, particularly in high-stakes applications. The question of accountability also arises: who bears the blame when an agentic AI makes a mistake?

Security and data privacy are two more important issues. Robust controls will be crucial to prevent misuse or breaches as these systems become more autonomous and handle increasingly sensitive data. Furthermore, the potential effects on the labor market cannot be overlooked. Agentic AI could boost productivity and open up new opportunities, but it could also displace some jobs, requiring a change in workforce training and skills.

Agentic AI trends

The following forecasts indicate how applications of agentic AI are expected to evolve:

- Increased Adoption: A wide range of enterprise departments will see a notable increase in the adoption of agentic AI as organizations realize the benefits for customer satisfaction, decision-making, and efficiency.
- Virtual Workforces: Rather than completely replacing human workers, agentic AI will increasingly be used in a collaborative, hybrid approach in which people and AI agents work together, each capitalizing on their own strengths.
- Contextual adaptation and personalization: Agentic AI systems will become more contextually aware and personalized, adjusting their interactions, suggestions, and choices to each user's requirements. This might be accomplished by developing machine learning methods that combine learning with ongoing feedback.
- Security Measures: Robust safeguards and control mechanisms are essential to guarantee that agentic systems are used safely and responsibly. These may include resource limits, access controls, validation checks, and attention to ethical considerations such as privacy, bias, and transparency.

Agentic AI is only the beginning

The potential advantages of agentic AI outweigh these difficulties by a wide margin. As this field of study develops, we should anticipate more advanced AI agents that can work with people in ways previously seen only in science fiction. The secret to realizing the full potential of agentic AI is finding the right balance between autonomy and human supervision.
By carefully considering the ethical implications of these systems' development, we may build AI agents that enhance rather than replace human capabilities.
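To close, the three-step workflow described earlier can be made concrete in code. Below is a minimal sketch in which each agent is just a prompt template around a text-generation call; `call_llm`, the roles, and the templates are all hypothetical stand-ins, not a specific framework's API.

```python
# Minimal sketch of the three-step agentic workflow described above:
# several single-purpose agents chained so that one agent's output
# becomes the next agent's input. `call_llm` is a hypothetical stand-in.

def call_llm(prompt: str) -> str:
    # Replace with a real model call (a local model or a hosted API).
    return f"[model output for: {prompt[:60]}...]"

class WorkflowAgent:
    def __init__(self, role: str, prompt_template: str):
        self.role = role
        self.prompt_template = prompt_template  # defines input/output (Step 2)

    def run(self, task_input: str) -> str:
        return call_llm(self.prompt_template.format(input=task_input))

# Step 1: decompose the job into roles with clear inputs and outputs.
researcher = WorkflowAgent("researcher", "List key facts about: {input}")
writer = WorkflowAgent("writer", "Draft an article from these facts: {input}")
editor = WorkflowAgent("editor", "Tighten and fact-check this draft: {input}")

# Step 3: automate the hand-offs, reformatting between agents if needed.
def run_workflow(task: str) -> str:
    facts = researcher.run(task)
    draft = writer.run(facts)
    return editor.run(draft)

print(run_workflow("the rise of agentic AI"))
```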
- AI-generated news to wake you up instead of a newspaper? TheGen Ponders
Heard of AI-generated news? Imagine a world where your morning news brief is curated by a super-intelligent AI, delivered straight to your inbox, tailored to your interests. With the rapid advancements in generative AI, this futuristic scenario is becoming a reality. But should we embrace this AI-driven revolution, or are there concerns that could jeopardize the quality and objectivity of our news?

What's AI-generated news?

Generative AI, like the popular ChatGPT, has shown remarkable capability in generating human-quality text. From writing essays to crafting poetry, these AI models are becoming increasingly sophisticated. Naturally, the news industry has taken notice. News outlets are experimenting with AI to automate tasks such as writing simple articles, generating headlines, and even translating content.

It's difficult to overlook the existential threat that large language models, or LLMs, appear to pose to news organizations. Chatbots could drastically reduce search traffic for news providers if they can immediately respond to user inquiries about news without directing them to news websites. Although LLMs are still not trustworthy sources of news and information due to the problem of hallucinations, products such as SearchGPT and Google's AI Overviews are progressively leading the way in this direction. The recent spike in news publisher litigation, licensing agreements, and scraping limitations suggests the industry is preparing for a paradigm shift that would further undermine newsrooms that depend on digital engagement to survive.

Though tech businesses have a strong motive to promote this change, it's considerably less clear whether consumers of news are genuinely interested in using AI chatbots as a source of information. Do people actually ask OpenAI's ChatGPT for the most recent election news? Are they checking the scores from last night's NBA games using Google's Gemini app? Are LLMs really becoming the preferred way to read news in the real world? Whether users and viewers are interested in using these chat interfaces for news and journalism will ultimately determine how much influence they have.

AI-generated news workflows

Several news organizations have already integrated AI into their workflows. The Associated Press, for instance, uses AI to generate earnings reports for thousands of publicly traded companies. The Washington Post has experimented with AI-powered tools to assist reporters in their research. And even the BBC has explored the potential of AI for creating personalized news summaries.

Between April 2023 and April 2024, there were approximately one million anonymised interactions on WildChat involving users from around 200 countries and two OpenAI models, GPT-3.5 and GPT-4. In return for revealing their chat history, users could access these models without limitation, giving an overview of how LLMs are actually used in the real world. Journalists who have examined the WildChat dataset have discovered a disproportionate number of sex-related discussions and homework assistance requests. After searching the dataset for cases of journalists using an LLM to write and report stories, researchers from the University of Washington discovered multiple examples of queries designed to produce article drafts.

Anecdotally, based on my preliminary examination of the data, other significant groups include: requests for prompts to be used with the image generator Midjourney; translation of text; inquiries concerning coding, statistics, and machine learning; and numerous creative writing tasks, like screenplays and short stories. It's evident that ChatGPT is used for a wide variety of purposes and behaviors, but here I want to pay particular attention to spotting news-oriented activity. Specifically, I look at news-related queries to address two main questions: How often do users ask LLMs about news? What kinds of inquiries do people make about news?

Generative AI chatbots: the analytical method

I used a multi-step process to find news inquiries in the dataset:

- First filtering: I removed all LLM responses from the WildChat dataset and limited it to English-language messages from users residing in the United States, to concentrate on users' original queries. This produced a selection of over 265,000 messages from roughly 146,000 chats.
- Annotation by hand: I went over a random sample of 1,000 messages and categorized each as a news query or a non-news query. I defined news queries as inquiries about news sources, general news requests, or questions concerning particular events. Non-news examples in this sample included: "Please send me fifty jokes"; "Could you demonstrate how to create an application that lets users draw and animate on transparent 3D planes such that their animations seem to be moving in 3D space?"; and "Determine the amount and characteristics of viral pathogens that adsorb on micro- and nano-plastics and that desorb from them." News query examples included: "What keeps news anchors and reporters on the air during a SAG-AFTRA strike?"; "What significant event occurred on January 23, 2023? On a scale of 1 to 10, indicate how certain you are that your response is accurate."; and "What has President Biden stated regarding the relationship between the Russian people and Vladimir Putin?" I identified only eight news queries in this sample.
- Targeted search: I ran targeted keyword searches (e.g., "news," "breaking," "New York Times," "Ukraine") to add more pertinent inquiries to the random sample. This surfaced 58 more questions, which I classified as news queries. After reviewing the results, I was left with a sample of 1,058 messages: 992 non-news inquiries and 66 news queries.
- LLM evaluation: Distinguishing news-related messages was difficult, since users may ask about the day's headlines, continuing national stories, or particular local events. Given the size and diversity of the dataset, I could not rely solely on a fixed set of search terms or manual review, so for a more in-depth analysis at scale I relied on LLM annotation.

News queries via generative AI

It is evident from these annotations of WildChat users' ChatGPT interactions that news queries are uncommon. Approximately 5,000 messages, or 1.88% of all messages in the sample, were identified as news-related. These messages came from approximately 1,000 distinct IP addresses, or 7.15 percent of the IP addresses in the sample.
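As a rough illustration of the targeted-search step above, keyword matching over a message table takes only a few lines. The sketch below assumes a pandas DataFrame with a `message` column; the column name and sample rows are made up for illustration and are not the actual WildChat schema.

```python
# Rough sketch of the targeted keyword search described above: flag
# messages mentioning news-related terms as candidates for manual review.
# Column names and rows are illustrative, not the real WildChat schema.
import re
import pandas as pd

messages = pd.DataFrame({
    "message": [
        "Please send me fifty jokes",
        "What has President Biden stated about Putin?",
        "breaking news on the Ukraine conflict today?",
    ]
})

NEWS_TERMS = ["news", "breaking", "new york times", "ukraine"]
pattern = re.compile("|".join(re.escape(term) for term in NEWS_TERMS), re.IGNORECASE)

# Keyword search is only a first pass: note that the Biden query above is
# news-related but matches no keyword, which is why hand annotation and
# LLM annotation are still needed.
messages["news_candidate"] = messages["message"].str.contains(pattern)
print(messages[messages["news_candidate"]])
```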
This scarcity is consistent with how most people get their news online: most people spend very little time reading journalism, and very little of what is shared on social media is news. However, there are notable differences among the news questions that do appear. Users consult LLMs for help navigating different aspects of the news consumption experience. These queries highlight a number of significant trends in how news consumers engage with LLMs, in ways that are difficult for traditional platforms to duplicate. Even though just a small portion of users currently engage in these behaviors, they point to potential future news-oriented products that might more directly meet user demands.

Active generative AI news support

It's evident from the typology of user behaviors identified that many users were actively involved in more complicated tasks like summarizing, translating, and analyzing the news rather than just passively consuming it. This indicates a change in how some consumers engage with news content: instead of just reading the news, they are using AI to process, understand, and customize it to suit their specific requirements. Furthermore, some users treated AI as a medium for "conversational news," incorporating news stories into larger discussions about science, politics, or local happenings. This is a big change from the old one-way news consumption model, since it allows readers to engage with news stories through discussion, analysis, and collaboration. When users wish to incorporate news items into their interactions, news organizations face concerns around copyright and ownership of their coverage.

Taken together, these behaviors show how LLMs can help readers by providing context for ongoing stories or summarizing articles. They can help break down difficult ideas into simpler ones or address specific follow-up inquiries. They can provide clarification on terms like "on background," offer advice on how to assess the reliability of sources, or give insights into the reporting process itself.

These use cases also reveal a tension in many newsrooms' approaches to generative AI from a design standpoint. When publishers experiment with LLM capabilities, such as translation, they frequently fail to customize these outputs for each reader. The queries presented here, however, imply that readers would benefit from more discretion over the context and extent of an LLM's assistance. Newsrooms therefore need to think about where they want to position their generative AI products on the spectrum between publisher and user control.

AI and media literacy

A significant portion of users sought assistance from LLMs in determining the reliability of news sources and understanding media bias. This behavior highlights the important role LLMs may play in promoting media literacy. LLMs could be designed to help users assess the reliability of news reports, include background information on various outlets' points of view, and emphasize the qualities of reliable journalism. This also includes helping users understand the fundamentals of news production, as well as the strengths and weaknesses of the model assisting in its delivery. By helping users better understand the media they consume, LLMs could enable people to make more educated judgments about the information they rely on.

AI-generated news benefits

- Efficiency: AI can produce content at a much faster pace than human journalists, allowing for quicker news coverage and updates.
- Accessibility: AI can help break down complex information into more digestible formats, making news accessible to a wider audience.
- Personalization: AI can tailor news content to individual preferences, ensuring that readers receive information relevant to their interests.

AI-generated news challenges

While the benefits of AI-generated news are undeniable, there are also significant challenges to consider:

- Bias: AI models can perpetuate biases present in the data they are trained on, which could lead to the dissemination of biased or inaccurate information.
- Lack of Human Judgment: AI may struggle to understand nuance and context in the way a human journalist can, which could result in missed details or inaccurate reporting.
- Job Displacement: The increased use of AI in journalism could lead to job losses for human journalists.

Generative AI chatbots – the future of news?

As AI technology continues to evolve, it is clear that it will play an increasingly important role in the news industry. However, it is crucial to approach this development with caution. While AI can be a valuable tool, it should not replace human judgment and oversight. The future of news likely lies in a hybrid approach, combining the efficiency of AI with the expertise and critical thinking of human journalists.

Nevertheless, these results point to important potential directions for LLMs in news consumption. The data highlights a compelling use case for more interactive and tailored news experiences, needs that LLMs can effectively address. Although LLMs present a genuine risk to news organizations, they also present a once-in-a-lifetime opportunity. Even as LLMs threaten to upend conventional news consumption, news organizations should take advantage of that very potential to better serve their consumers. According to the data, users expect more than quick fixes; they want more customisation, in-depth interaction, and reliable sources. News companies can meet these changing needs by intelligently using generative AI to create tailored, interactive news experiences that entice readers to return.

So, do you want generative AI chatbots to write the news for you? The answer may depend on how you weigh the benefits and risks. As with any new technology, it is essential to approach it with a critical eye and a commitment to ensuring that it serves the public interest.
- AI’s role in skin cancer detection via behaviour change
The last year has witnessed incredible advancements in AI-assisted cancer detection as more and more medical professionals test, use, and incorporate AI companions into their daily work. Skin cancer is no exception, and we anticipate that AI diagnostic technologies will become widely used in this clinical setting.

How does artificial intelligence help with skin cancer detection?

In a 2024 study, Stanford Medicine researchers examined the accuracy of clinicians in identifying at least one skin cancer with and without deep learning-based AI support. In an experimental setting, the average sensitivity for physicians working without AI support was 74.8%, while the average sensitivity for clinicians with AI support was approximately 81.1%. Interestingly, AI benefited medical professionals across the board, with non-dermatologists showing the biggest improvement.

AI for skin cancer can impact behaviour change

Cancer is increasingly affecting younger people. According to a study published in BMJ Oncology, the number of people under 50 receiving a cancer diagnosis has increased by almost 80% globally over the last three decades. Furthermore, the incidence of melanoma skin cancer has risen by nearly two-fifths (38%) over the past ten years, with Spain witnessing a consistent 2.4% increase in incidence during this period.

Skin cancer has a very good prognosis and is easily treatable if it is discovered early enough. However, fewer people are being checked due to hectic schedules and conflicting priorities, which delays diagnosis and treatment and significantly alters survival rates. Those who do notice something worrying frequently postpone seeing a physician. Indeed, only 9% of respondents to a recent Bupa study on attitudes toward digital healthcare said they would visit a professional right away to get a mole they were worried about looked at. The same study found, however, that this number more than tripled (to 33%) if consumers could have a mole evaluated by an AI-powered phone app whenever it was convenient for them. This suggests that new technologies can significantly change healthcare-related behaviour for the better and improve the clinical outcomes of potentially serious illnesses.

Bupa offers an at-home dermatology tool

Bupa is exploring a number of potential applications for artificial intelligence (AI). The company is constantly looking to improve patient care, boost productivity, and support its customers in leading longer, healthier, and happier lives. It recently launched Blua, a digital healthcare service that is available in over 200 countries. Blua provides access to three healthcare innovations that drive convenience and accessibility: virtual consultations, so that a customer can connect to a health professional from wherever they choose; digital health programmes that allow customers to proactively manage their health; and remote healthcare services such as prescription delivery and at-home monitoring equipment.

For customers in Spain, Bupa offers an at-home dermatology assessment service through Blua. How does this work? Customers who are worried about a skin lesion can take high-resolution photos of it using their smartphone. The photos are uploaded to Blua and compared, using AI, against a database of millions of images of skin lesions to check for signs of malignancy. The tool's algorithms are able to discern between 302 different skin pathologies.

If the tool suspects there is cause for concern, it will prompt the customer to book a follow-up appointment with a doctor so the lesion can be examined further and preventative action taken if needed.

AI's skin cancer boost means early detection

Digital healthcare, together with AI, is going to play a crucial role in removing the barriers that stop people from getting health concerns checked. This is why tools such as Blua are useful in today's fast-paced world, where convenience is paramount; virtual consultations and at-home tests will empower individuals to prioritise their health without sacrificing their time.
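For readers curious what the triage step in such a pipeline might look like in code, here is an illustrative sketch only: the model is an untrained stand-in with 302 output classes (matching the pathology count quoted above), and nothing here reflects Bupa's actual, non-public system.

```python
# Illustrative sketch of a lesion-photo triage step. The classifier is an
# untrained stand-in with 302 classes; the real Blua system is not public.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=None, num_classes=302)  # stand-in classifier
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def should_escalate(photo_path: str, malignant_classes: set, threshold: float = 0.10) -> bool:
    """Return True if the photo should be flagged for a doctor's follow-up."""
    image = preprocess(Image.open(photo_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    # Err on the side of caution: escalate if any malignant class clears the bar.
    return any(probs[c].item() >= threshold for c in malignant_classes)

# Hypothetical usage, with classes 0-11 marked malignant in a made-up label map:
# print(should_escalate("lesion.jpg", malignant_classes=set(range(12))))
```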
- SEA-LION LLMs. Sony and AI Singapore collaborate.
Sony Research and AI Singapore (AISG) will collaborate on research for the SEA-LION family of large language models (LLMs). SEA-LION, which stands for Southeast Asian Languages In One Network, aims to improve the accuracy and capability of AI models when processing languages from the region. This is particularly important given the linguistic diversity of Southeast Asia, which is home to over a thousand different languages.

“As a global company, diversity and localisation are vital forces,” said Hiroaki Kitano, President of Sony Research. “In Southeast Asia specifically, there are more than a thousand different languages spoken by the citizens of the region. This linguistic diversity underscores the importance of ensuring AI models and tools are designed to support the needs of all populations around the world.”

The collaboration will focus on testing and improving the SEA-LION model, with a particular emphasis on Tamil, a language spoken by an estimated 60-85 million people worldwide. Sony Research will leverage its expertise in Indian languages – including Tamil – and its research in speech generation, content analysis, and recognition.

“Access to LLMs that address the global landscape of language and culture has been a barrier to driving research and developing new technologies that are representative and equitable for the global populations we serve,” Kitano added.

The collaboration is further strengthened by Kitano’s existing ties to the Singaporean technology landscape. He holds positions on numerous advisory councils and boards in the country, including the Advisory Council on the Ethical Use of AI and Data, the Infocomm Media Development Authority (IMDA), the Singapore Economic Development Board (EDB), and the National Research Foundation, Singapore (NRF).

“The integration of the SEA-LION model, with its Tamil language capabilities, holds great potential to boost the performance of new solutions,” Teo continued. “We are particularly eager to contribute to the testing and refinement of the SEA-LION models for Tamil and other Southeast Asian languages, while also sharing our expertise and best practices in LLM development.

“We look forward to seeing how this collaboration will drive innovation in multilingual AI technologies.”
- AI winter: A brief history of hype, disappointment, and recovery.
The phrase "AI winter" describes a time when funding for AI research and development is reduced, frequently as a result of high expectations that turn out to be unmet. This tendency seems all too familiar today, with recent generative AI systems falling short of investor promises (from OpenAI's GPT-4o to Google's AI-powered overviews). AI winters traditionally have followed cycles of optimism and disillusionment, according to Search Engine Land. The first of these happened in the 1970s as a result of the disappointing outcomes of large-scale programs that sought to achieve speech recognition and machine translation. Given that there was insufficient computing power, and the expectations of what computers could achieve in the field were unrealistic, funding was frozen. The expert systems in the 1980s showed promise, but the second AI winter occurred when these systems failed to handle unexpected inputs. The decline of LISP machines, and the failure of Japan’s Fifth Generation project, were additional factors that contributed to the slowdown. Many researchers distanced themselves from AI, opting to call their work informatics or machine learning, to avoid the negative stigma. AI winter. A story of resilience AI pushed through the 1990s, albeit slowly and painfully, and was mostly impractical. Even though IBM Watson was supposed to revolutionise the way humans treat illnesses, its implementation in real-world medical practices encountered challenges at every turn. The AI machine was unable to interpret doctors’ notes, and cater to local population needs. In other words, AI was exposed in delicate situations requiring a delicate approach. AI research and funding surged again in the early 2000s with advances in machine learning, and big data. However, AI’s reputation, tainted by past failures, led many to rebrand AI technologies. Terms like blockchain, autonomous vehicles, and voice-command devices gained investor interest, only for most to fade when they failed to meet inflated expectations. Lessons from past AI winters Each AI winter follows a familiar sequence: expectations lead to hype, followed by disappointments in technology, and finances. AI researchers retreat from the field, and dedicate themselves to more focused projects. However, these projects do not support the development of long-term research, favouring short-term efforts, and making everyone reconsider AI’s potential. Not only does this have an undesirable impact on the technology, but it also influences the workforce, whose talents eventually deem the technology unsustainable. Some life-changing projects are also abandoned. Yet, these periods provide valuable lessons. They remind us to be realistic about AI’s capabilities, focus on foundational research, and communicate transparently with investors, and the public. Are we headed toward another AI winter? After an explosive 2023, the pace of AI progress appears to have slowed; breakthroughs in generative AI are becoming less frequent. Investor calls have seen fewer mentions of AI, and companies struggle to realise the productivity gains initially promised by tools like ChatGPT. The use of generative AI models is limited due to difficulties, such as the presence of hallucinations, and a lack of true understanding. Moreover, when discussing real-world applications, the spread of AI-generated content, and numerous problematic aspects concerning data usage, also present problems that may slow progress. However, it may be possible to avoid a full-blown AI winter. 
Open-source models are catching up quickly to closed alternatives and companies are shifting toward implementing different applications across industries. Monetary investments have not stopped either, particularly in the case of Perplexity, where a niche in the search space might have been found despite general scepticism toward the company’s claims. The future of AI and its impact on businesses It is difficult to say with certainty what will happen with AI in the future. On the one hand, progress will likely continue, and better AI systems will be developed, with improved productivity rates for the search marketing industry. On the other hand, if the technology is unable to address the current issues — including the ethics of AI’s existence, the safety of the data used, and the accuracy of the systems — falling confidence in AI may result in a reduction of investments and, consequently, a more substantial industry slowdown. In either case, businesses will need authenticity, trust, and a strategic approach to adopt AI. Search marketers, and AI professionals, must be well-informed and understand the limits of AI tools. They should apply them responsibly, and experiment with them cautiously in search of productivity gains, while avoiding the trap of relying too heavily on an emerging technology.
- Generative AI in healthcare and medicine regulation - TheGen introspective
Generative AI has arrived in medicine. Normally, when a new device or drug enters the U.S. market, the Food and Drug Administration (FDA) reviews it for safety and efficacy before it becomes widely available. This process not only protects the public from unsafe and ineffective tests and treatments but also helps health professionals decide whether and how to apply it in their practices.

Unfortunately, the usual approach to protecting the public and helping doctors and hospitals manage new health care technologies won’t work for generative AI. To realize the full clinical benefits of this technology while minimizing its risks, we will need a regulatory approach as innovative as generative AI itself. The reasons lie in the nature of the FDA’s regulatory process and of this remarkable new technology.

Generally speaking, the FDA requires producers of new drugs and devices to demonstrate that they are safe and effective for very specific clinical purposes. To comply, manufacturers undertake elaborate and carefully monitored clinical trials to assess the safety and efficacy of their product for one indication — say, reducing blood sugar levels in diabetics or detecting abnormal heart rhythms in patients with heart disease. A drug or a piece of software might be approved for treatment of one illness or condition but not for any other until a clinical trial has demonstrated that the drug or device worked well for that additional indication.

Insurance companies, including the federal government, will generally only pay for new treatments after they get FDA approval. And clinicians can be assured that if they use those technologies for FDA-approved uses and in accordance with FDA-approved guidance, they will not be liable for damaging side effects — at least if they warned patients about them. Clinicians can and do use approved drugs and devices for unapproved, so-called “off-label,” purposes, but they face additional liability risk, and patients’ insurance companies may not pay for the drug or device.

Why won’t this well-established framework work for generative AI? The large language models (LLMs) that power products like ChatGPT, Gemini, and Claude are capable of responding to almost any type of question, in health care and beyond. (Disclosure: One of us — Bakul Patel — works for Google, the company that owns Gemini.) In other words, they have not one specific health care use but tens of thousands, and subjecting them to traditional pre-market assessments of safety and efficacy for each of those potential applications would require untold numbers of expensive and time-consuming studies. To further complicate matters, LLMs’ capabilities are constantly changing as their computational power grows and they are trained on new datasets. It’s as though the chemical composition and effects of a new drug were continuously and endlessly changing from the moment of its approval.

The way forward may involve conceiving of LLMs not as new devices but as novel forms of intelligence. Lest this seem totally bizarre, consider the following. Society has long experience with regulating human intelligence applied to health care for the purpose of assuring that humans are qualified to practice their professions. Physicians and nurses undergo elaborate training and testing before they are allowed to apply their skills. Practicing medicine without a license is a crime, and governments can revoke licenses when clinicians’ intelligence is compromised through illness or substance abuse.
In the human case, the evolution of intelligence is not a problem but a benefit, as clinicians learn from experience and from new scientific discoveries. Periodic recertification through testing is increasingly common for physicians who wish to remain board certified in a particular specialty like cardiology or family practice.

How could this paradigm be applied to generative AI? Before approving their use by clinicians, government could require that LLMs undergo a prescribed training regimen modeled on the training of physicians and other clinicians. This could involve exposing the models to specific training materials, then testing their command of those materials through tailored examinations. LLMs could also be required to undergo a period of supervised application in which expert human clinicians observed and corrected their responses — as medical school faculty do for physicians in training during their internships, residencies, and fellowships. LLMs could further be required to undergo periodic retraining (to keep up with changes in the field) and then retesting to assure that they had mastered the new materials.

This regime would pose some novel challenges requiring research and development to lay the appropriate groundwork. The performance measures applied to humans might not be fully appropriate for assessing generative AI. Testing would have to set standards for acceptable rates of the errors to which LLMs are prone, such as “hallucinations,” in which the models make up answers out of whole cloth. Since LLMs also tend to change their answers to the same question, they would need to be evaluated for consistency of response and for their ability to take imprecise questions (or “prompts,” as they are called in the field) and respond with relevant and correct answers. Assessment of LLMs would also require transparency not only about the models’ performance but also about the data they were trained on and who built them. This would help inform users of potential biases in the LLMs, including those arising from the financial interests of their creators.

No regulatory regime is likely to guarantee that clinical generative AI performs perfectly. But then, neither does the regulation of human clinical intelligence. Few physicians ace their licensing or board certification exams, and even the most senior and capable clinicians make errors over the course of their careers.

Of course, some might argue that creating this new regulatory framework is an unnecessary intrusion on the freedom of clinicians to decide which new devices to use in their practices. Our response is that physicians and other clinicians lack the expertise and time to individually evaluate each of the proliferating LLMs that will be available to them. Some will be able to rely on their hospitals and health systems for guidance, but many smaller hospitals and independent physicians won’t have that luxury. Regulatory review might also help payers, including Medicare, decide which LLM-related services to pay for, and it would reduce the chance that practitioners will be held liable if and when patients suffer adverse events. In other words, regulatory scrutiny might increase the likelihood that the benefits of LLMs will be realized, because clinicians will feel the technology is safe to use.

Technological breakthroughs like generative AI offer huge prospective benefits. But to realize those benefits, we may need breakthroughs in health policy as dramatic as generative AI itself.
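To make the notions of consistency and hallucination testing concrete, here is a minimal sketch of what one piece of such an examination harness might look like. Everything in it is illustrative: ask_model is a stand-in for whatever LLM interface is being certified, the grading is naive exact matching, and the pass thresholds are invented for the example rather than drawn from any real regulatory proposal.

```python
import collections

# Illustrative only: repeatedly ask a model the same clinical question
# and measure how consistent its answers are. `ask_model` is a stand-in
# for whatever LLM API is under evaluation.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("replace with a real LLM API call")

def consistency_rate(prompt: str, n_trials: int = 20) -> float:
    """Fraction of trials that return the modal (most common) answer."""
    answers = [ask_model(prompt).strip().lower() for _ in range(n_trials)]
    _, count = collections.Counter(answers).most_common(1)[0]
    return count / n_trials

def hallucination_rate(cases: list[tuple[str, set[str]]]) -> float:
    """Fraction of cases whose answer falls outside the accepted set.

    Each case pairs a prompt with a set of clinically acceptable answers;
    grading is naive exact matching for the sake of the sketch.
    """
    wrong = sum(
        ask_model(prompt).strip().lower() not in accepted
        for prompt, accepted in cases
    )
    return wrong / len(cases)

# A certification rule might then look like (thresholds invented):
#   pass if consistency_rate(q) >= 0.95 for every q in the question bank
#   and hallucination_rate(bank) <= 0.01
```

A real certification exam would of course need expert-authored question banks and clinician review of free-text answers; the point is only that consistency and error rates are measurable quantities for which a regulator could set standards.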
- Huawei vs Apple in the smartphone AI race. Who wins?
When Apple revealed the iPhone 16 series, its first line of AI-powered smartphones, it was supposed to signal the beginning of a new age for the tech giant. However, many were left disappointed, because Apple Intelligence, the software behind those features, is still in beta testing and will likely take months or even longer to roll out globally.

Making matters worse for the Cupertino, California-based company, Chinese tech giant Huawei, which boasts considerably more capable AI features driven by its in-house Kirin chips, introduced its new Mate XT smartphone just hours after Apple's event. Huawei's move demonstrates its inventiveness in the face of US sanctions, but it also raises questions about Apple's dominance in mainland China, one of its most important markets. Let's investigate why.

Huawei vs Apple

Apple's tardiness in releasing AI features has left many industry analysts unsure whether its AI initiative is ready for prime time. According to Needham analyst Laura Martin, cited by Reuters, "the core Apple message for iPhone 16 was: Next year will be better," with a number of references to "later this year" and "early next year."

The suite of tools known as Apple Intelligence has been in development for a while, and the company has been promoting the technology since its developer conference in June. But many important features won't be available until next year, and the software won't ship with the first iterations of the new iPhones.

Delayed AI features in iPhone 16

Many Apple users in China were initially excited about Monday's iPhone 16 series debut, but their euphoria quickly turned to regret when they saw that the AI features will not be available in their language until next year. Huawei's AI assistant, by contrast, will be accessible as soon as the Mate XT goes on sale later this month. It offers text summarization, translation, and editing, along with AI-enhanced image tools such as cropping unwanted objects out of photos.

This delay has led Chinese consumers to question the value proposition of the new iPhones, particularly in light of the intense rivalry from regional competitors like Huawei. "The absence of AI in China is akin to cutting one of Apple's arms," one commenter said on Weibo, the popular Chinese microblogging platform. "Shouldn't you charge us half the price with the biggest selling point unavailable?" another user asked sharply. According to the South China Morning Post, these opinions reflect rising dissatisfaction among Chinese customers who believe they are not receiving the full benefit of Apple's most recent improvements.

Huawei's new Mate XT, on the other hand, folds into three panels like an accordion and, according to the same outlet, has already drawn over four million pre-orders with no deposit needed. For scale, research firm IDC estimated the entire global market for foldable phones at approximately 4 million units in the second quarter. "We present to you today a product that anybody could envision but could not produce," Richard Yu, executive director of Huawei, stated at the launch. "Our team has been working hard for five years and has never given up."

Apple, however, has yet to disclose a Chinese AI partner to support Apple Intelligence, a situation made more complex by the unclear regulatory environment in mainland China.
The country's Ministry of Industry and Information Technology has approved 188 large language models for public use, and none of them comes from a foreign enterprise. That raises the question of whether mainland China will gain access to Apple's AI features even after they are released in other Chinese-speaking regions. According to Apple's website, decisions by Chinese regulators will determine when its AI features launch in China.

Apple needs to catch up quickly. Its ranking among smartphone vendors in the world's second-largest economy has slipped from third to sixth as sales have declined, even though demand for Apple in China has historically been strong, with new iPhone launches often sparking a frenzy.

The industry took an unexpected turn last year when Shenzhen-based Huawei returned to the high-end smartphone market with a device running on a domestically produced chip, despite US sanctions that had been designed to block its access to the global chip supply chain. The Mate 60 Pro's launch surprised observers and US officials alike. Huawei has also begun shipping two-way foldable phones, and this year it overtook Samsung Electronics as the world's biggest supplier of those devices, thanks to robust sales in China.

Apple Intelligence's delay in China gives rivals like Huawei a window of opportunity to seize market share and position themselves as the leaders in AI-powered devices on home soil. That could make it harder for Apple to recover ground when its AI features do eventually reach the country.

In the end, China is a high-stakes test of Apple's AI ambitions. The company's brand remains highly appealing, but it faces substantial obstacles, from the slow release of AI features to fierce rivalry from local competitors like Huawei. Apple's future performance in this crucial market may depend on its capacity to adapt its AI strategy to the specific needs of Chinese smartphone users.
- xAI and Nvidia break records with ‘Colossus’ AI training system
Elon Musk’s xAI has unveiled its record-breaking AI training system, dubbed ‘Colossus’. Musk revealed that the xAI team had successfully brought the 100k-H100 Colossus training cluster online after a 122-day build. Not content with its existing capabilities, Musk stated that “over the next couple of months, it will double in size, bringing it to 200k (50k H200s).”

The scale of Colossus is unprecedented, surpassing every other cluster to date. For context, Google uses 90,000 GPUs and OpenAI utilises 80,000 GPUs — both already surpassed by xAI’s creation, even before Colossus doubles in size over the coming months.

Developed in partnership with Nvidia, Colossus leverages some of the most advanced GPU technology on the market. The system initially employs Nvidia’s H100 chips, with plans to incorporate the newer H200 model in its expansion. This vast array of processing power positions Colossus as the most formidable AI training system currently available.

The H200, while recently superseded by Nvidia’s Blackwell chip unveiled in March 2024, remains a highly sought-after component in the AI industry. It boasts impressive specifications, including 141 GB of HBM3E memory and 4.8 TB/s of bandwidth. The Blackwell chip raises the bar further still, with top-end memory capacity 36.2% higher than the H200’s and a 66.7% increase in total bandwidth.

Nvidia’s response to the Colossus unveiling was one of enthusiasm and support. The company congratulated Musk and the xAI team on their achievement, highlighting that Colossus will not only be the most powerful system of its kind but will also deliver “exceptional gains” in energy efficiency.

Colossus’ processing power could accelerate breakthroughs in AI applications ranging from natural language processing to complex problem-solving algorithms. However, the unveiling also reignites discussions about the concentration of AI power among a handful of tech giants and well-funded startups. As companies like xAI push the boundaries of what’s possible in AI training, concerns about whether smaller organisations and researchers can access such advanced technology may come to the forefront.

As the AI arms race continues to heat up, all eyes will be on xAI and its competitors to see how they leverage these increasingly powerful systems. With Colossus, Musk and his team have thrown down the gauntlet, challenging rivals to match or exceed their efforts.
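As a back-of-the-envelope check on those chip figures, the percentage gains quoted above can be converted into absolute numbers. The H200 baseline comes straight from the article; the derived Blackwell figures are simple arithmetic, not official Nvidia specifications:

```python
# H200 baseline figures as quoted in the article.
h200_memory_gb = 141      # HBM3E capacity, GB
h200_bandwidth_tbs = 4.8  # memory bandwidth, TB/s

# Blackwell's quoted advantages over the H200.
capacity_gain = 0.362   # "top-end memory capacity 36.2% higher"
bandwidth_gain = 0.667  # "66.7% increase in total bandwidth"

blackwell_memory_gb = h200_memory_gb * (1 + capacity_gain)
blackwell_bandwidth_tbs = h200_bandwidth_tbs * (1 + bandwidth_gain)

print(f"Implied Blackwell memory:    {blackwell_memory_gb:.0f} GB")      # ~192 GB
print(f"Implied Blackwell bandwidth: {blackwell_bandwidth_tbs:.1f} TB/s")  # ~8.0 TB/s
```

The implied figures of roughly 192 GB and 8 TB/s are consistent with what Nvidia has publicised for its top-end Blackwell configurations.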