Everything posted by ChatGPT
-
AI is set to become a cornerstone of the UK's vision for economic and societal renewal, with a sweeping action plan unveiled today by Prime Minister Keir Starmer. The government has committed to all 50 recommendations outlined in the ambitious AI Opportunities Action Plan created by Matt Clifford CBE, tech entrepreneur and chair of the Advanced Research and Invention Agency.

“Our plan will make Britain the world leader,” declared Starmer. “That means more jobs and investment in the UK, more money in people’s pockets, and transformed public services. That’s the change this government is delivering.”

The plan positions AI as a key driver of national progress, aiming to revolutionise public services, boost productivity, and establish the UK as a global leader in the field.

Chris Lehane, Chief Global Affairs Officer at OpenAI, said: “From the locomotive to the Colossus computer, the UK has a rich history of leadership in tech innovation and the research and development of AI.

“The government’s AI action plan – led by the Prime Minister and Secretary Peter Kyle – recognises where AI development is headed and sets the UK on the right path to benefit from its growth. The UK has an enormous national resource in the talent of its people, institutions, and businesses which together can leverage AI to advance the country’s national interest.”

A plan to unlock economic potential

The economic benefits of AI adoption form a central tenet of the government’s strategy. The International Monetary Fund estimates that embracing AI could increase productivity by 1.5 percentage points annually. Over a decade, this could add up to £47 billion in economic gains per year, according to Treasury calculations.

The strategy aims not only to grow the economy but to create tangible benefits for citizens. Dr Jean Innes, CEO of the Alan Turing Institute, said: “This plan offers an exciting route map, and we welcome its focus on adoption of safe and responsible AI, AI skills, and an ambition to sustain the UK’s global leadership, putting AI to work driving growth, and delivering benefits for society.

“We share these ambitions and look forward to working with the government, universities, industry and civil society to shape the future of these technologies to support the UK’s success and improve people’s lives.”

Three major companies – Vantage Data Centres, Nscale, and Kyndryl – have already pledged a combined £14 billion in investments, creating 13,250 jobs across the country and reinforcing confidence in the UK’s AI potential. This adds to the £25 billion in commitments secured at last year’s International Investment Summit.

Vantage Data Centres is set to invest over £12 billion in UK data centre projects, including one of Europe’s largest data centre campuses in Wales. Meanwhile, Kyndryl will establish a new tech hub in Liverpool, creating up to 1,000 AI-related jobs. Nscale plans to back UK data centre infrastructure with a $2.5 billion investment, which includes the construction of the nation’s largest sovereign AI data centre in Essex by 2026.

Alison Kay, VP for UK and Ireland at Amazon Web Services (AWS), said: “At AWS, we’ve seen first-hand the benefits that digital technologies like AI can bring, and that’s why we’ve announced plans to invest £8 billion over the next five years building, operating, and maintaining data centres in the UK.
“By putting close industry collaboration and public-private partnership at the heart of the government’s agenda, every citizen, community and business in every region of the UK will have the opportunity to realise the benefits of AI, and thrive and grow.”

Zahra Bahrololoumi CBE, CEO of Salesforce UK and Ireland, added: “Businesses in the UK are in a prime position to fully unlock the opportunities of AI, thanks to a strong innovation culture and risk-based laws. That is why Salesforce chose the UK as the location of its first-ever AI Centre, building on our commitment to invest $4 billion in our UK operations to harness the transformational power of AI.”

Transforming public services

AI is already being deployed within UK hospitals, helping to modernise the NHS by diagnosing conditions such as breast cancer more swiftly, improving care for non-verbal patients, and expediting patient discharges.

“NHS datasets could be invaluable for impactful AI innovations in healthcare,” said Dr Bilal, Professor of Applied AI and Technology Ethics at Birmingham City University. “But they remain largely inaccessible to many researchers due to prohibitive costs and logistical hurdles. Initiatives like NHS Secure Data Environments are a great start but must be made more affordable, or ideally free, for academic institutions.”

Beyond healthcare, the government is betting that broader adoption of AI could achieve similar efficiencies in other public sectors. For instance, the plan highlights its potential to reduce administrative workloads for teachers, speed up planning consultations to facilitate new housing developments, and even identify potholes on roads via AI-enhanced cameras to expedite repairs.

The new strategy introduces AI Growth Zones, designed to accelerate the development of critical infrastructure. These zones will not only expedite planning permissions but also ensure dedicated energy connections to power AI projects. The first such zone will be established in Culham, Oxfordshire.

Building AI infrastructure

Supporting the UK’s AI aspirations requires significant investment in infrastructure. The plan includes:

- A twentyfold increase in public compute capacity, with immediate work commencing on a new supercomputer to support AI advancements.
- The creation of a National Data Library, designed to safely unlock the potential of public data to fuel AI innovation.
- The establishment of a dedicated AI Energy Council, chaired by the Science and Energy Secretaries, to address the energy demands of AI development. This aligns with the government’s goal of becoming a clean energy superpower.
- A new government team tasked with developing the UK’s sovereign AI capabilities.

The government also aims to provide stability for businesses by balancing the dynamism of the US with the regulatory guardrails seen in the EU. Science, Innovation, and Technology Secretary Peter Kyle said the UK would leverage its unique strengths: “We already have remarkable strengths we can tap into when it comes to AI – building our status as the cradle of computer science and intelligent machines and establishing ourselves as the third largest AI market in the world.”

Reservations and risks with the AI action plan

While the ambitious plan has been met with enthusiasm by many, industry experts caution against overlooking the potential risks posed by unregulated AI deployment.
Dr Pia Hüsch, Research Fellow in Cyber, Technology and National Security at RUSI, commented: “Labour’s AI Opportunities Action Plan has economic growth as the top priority, shifting focus away from the previous government’s priorities around AI safety and regulation.

“While the focus on investing in infrastructure such as computing power and a national data library is welcome, the UK Government must not forget risks posed by AI technologies or the international partnerships that are needed to secure long-term benefit from AI technologies.”

Similarly, Deryck Mitchelson, Global Chief Information Security Officer at Check Point Software, expressed concerns about security and ethics: “AI integration without airtight security measures will only amplify these risks. External oversight of AI models and training datasets is essential – not optional.

“We need built-in equality, ethics, and a transparent framework to measure outcomes and prove these systems genuinely enhance performance, not just cut costs.”

Mitchelson warned that hasty deployment could erode public trust in AI-driven services and deepen inequality. He emphasised that the government must present this initiative as more than a cost-saving strategy and instead prioritise transparency, accountability, and robust safeguards.

The AI Opportunities Action Plan is a key pillar of the government’s Industrial Strategy and the first stage of a forthcoming Digital and Technology Sector Plan. It also ties into the government’s broader Plan for Change, aiming to ensure AI’s benefits are felt across every corner of the UK.

Chancellor Rachel Reeves MP underscored the initiative’s dual focus on economic growth and public service transformation: “AI is a powerful tool that will help grow our economy, make our public services more efficient and open up new opportunities to help improve living standards.

“This action plan is the government’s modern industrial strategy in action.”

As the UK accelerates efforts to harness AI, it faces the dual challenge of staying ahead in the global race for innovation while ensuring the potential pitfalls of the technology are minimised. Today’s announcements mark a bold step forward, but the road ahead will require careful navigation.

See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence

The post UK Government signs off sweeping AI action plan appeared first on AI News.
-
The new year was only beginning, but technology had already taken centre stage in a tragic event that shocked many. Just outside the Trump International Hotel in Las Vegas, a Tesla Cybertruck erupted in an explosion, leaving one person dead and seven others with minor injuries. The devastating incident, confirmed by Las Vegas Sheriff Kevin McMahill, has sparked discussions about the role of artificial intelligence and its darker implications in today’s world.

The Las Vegas Metro Police Department said that the truck’s bed had an alarming mix of gasoline canisters, camp fuel, and large firework mortars. Authorities believe these items were tied to a detonation system controlled by the driver, who appeared to have meticulously planned the attack. The combination of materials painted a chilling picture of a calculated and premeditated act.

The driver, identified as 37-year-old Matthew Livelsberger, was an active-duty soldier in the US Army. Investigators found a “possible manifesto” saved on his phone, along with emails to a podcaster and other documents outlining his intentions. Surveillance footage revealed him preparing for the explosion by pouring fuel onto the truck at a stop before driving to the hotel. Despite all of the preparations, officials confirmed that Livelsberger had no prior criminal record and was not under surveillance at the time of the incident.

One revelation drew significant public attention: Livelsberger had used ChatGPT to aid in his plans. Law enforcement reported that he queried the AI tool for information about assembling explosives, calculating how fast a round would need to be fired to detonate the materials, and understanding which legal loopholes might allow him to acquire the components.

Sheriff McMahill addressed this unsettling development, stating: “We know AI was going to change the game for all of us at some point or another, in really all of our lives. I think this is the first incident that I’m aware of on US soil where ChatGPT is utilised to help an individual build a particular device.”

Tragically, Livelsberger’s life ended at the scene with a self-inflicted gunshot wound. Authorities identified his body through DNA and tattoos due to the extensive burns he sustained in the explosion.

OpenAI, the company behind ChatGPT, responded to the incident with a statement expressing their sorrow and emphasising their commitment to responsible AI use. “Our models are designed to refuse harmful instructions and minimise harmful content. In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities,” the statement read. OpenAI also confirmed their cooperation with law enforcement in the ongoing investigation.

The explosion itself was described as a deflagration – a slower, less destructive reaction compared to a high-explosive detonation. Investigators suspect the muzzle flash from a gunshot may have ignited fuel vapours or fireworks fuses in the truck, triggering a chain reaction. Other possibilities, though, such as an electrical short, have not been ruled out.

The Las Vegas explosion is a grim reminder of technology’s double-edged nature. While AI has enormous potential, its darker applications are forcing society to consider how to prevent such tragedies in the future.

(Photo by Unsplash)

See also: OpenAI: Musk wanted us to merge with Tesla or take ‘full control’
The post Driver used ChatGPT to plan attack, authorities reveal appeared first on AI News.
-
You’ve got a great idea for an AI-based application. Think of fine-tuning like teaching a pre-trained AI model a new trick. Sure, it already knows plenty from training on massive datasets, but you need to tweak it to your needs – for example, if you need it to pick up abnormalities in scans or figure out what your customers’ feedback really means.

That’s where hyperparameters come in. Think of the large language model as your basic recipe and the hyperparameters as the spices you use to give your application its unique “flavour.” In this article, we’ll go through some basic hyperparameters and model tuning in general.

What is fine-tuning?

Imagine someone who’s great at painting landscapes deciding to switch to portraits. They understand the fundamentals – colour theory, brushwork, perspective – but now they need to adapt their skills to capture expressions and emotions.

The challenge is teaching the model the new task while keeping its existing skills intact. You also don’t want it to get too ‘obsessed’ with the new data and miss the big picture. That’s where hyperparameter tuning saves the day. LLM fine-tuning helps LLMs specialise: it takes their broad knowledge and trains them to ace a specific task, using a much smaller dataset.

Why hyperparameters matter in fine-tuning

Hyperparameters are what separate ‘good enough’ models from truly great ones. If you push them too hard, the model can overfit or miss key solutions. If you go too easy, it might never reach its full potential. Think of hyperparameter tuning as a type of business automation workflow: you’re talking to your model; you adjust, observe, and refine until it clicks.

7 key hyperparameters to know when fine-tuning

Fine-tuning success depends on tweaking a few important settings. This might sound complex, but the settings are logical.

1. Learning rate

This controls how much the model changes its understanding during training. Getting this type of hyperparameter optimisation right is critical because, as the operator, if you:

- Go too fast, the model might skip past better solutions,
- Go too slow, it might feel like you’re watching paint dry – or worse, it gets stuck entirely.

For fine-tuning, small, careful adjustments (rather like adjusting a light’s dimmer switch) usually do the trick. You want to strike the right balance between accuracy and speedy results. How you’ll determine the right mix depends on how well the model tuning is progressing, so you’ll need to check periodically to see how it’s going.

2. Batch size

This is how many data samples the model processes at once. You want to get the size just right, because:

- Larger batches are quick but might gloss over the details,
- Smaller batches are slow but thorough.

Medium-sized batches might be the Goldilocks option – just right. Again, the best way to find the balance is to carefully monitor the results before moving on to the next step.

3. Epochs

An epoch is one complete run through your dataset. Pre-trained models already know quite a lot, so they don’t usually need as many epochs as models starting from scratch. How many epochs is right?

- Too many, and the model might start memorising instead of learning (hello, overfitting),
- Too few, and it may not learn enough to be useful.
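The three settings above map directly onto the configuration of most training frameworks. As a minimal sketch, assuming the Hugging Face transformers library and a model and tokenised dataset you have already prepared, they might look like this (the values are illustrative starting points, not universal recommendations):

```python
# A minimal sketch: the three hyperparameters discussed so far, expressed
# as Hugging Face TrainingArguments. Values are illustrative defaults for
# fine-tuning, not recommendations for any specific task.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./fine-tune-output",
    learning_rate=2e-5,              # small, careful steps for fine-tuning
    per_device_train_batch_size=16,  # a middle-ground batch size
    num_train_epochs=3,              # pre-trained models rarely need many passes
)
```

Checking validation metrics after each epoch, as suggested above, tells you whether these starting points need adjusting.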
4. Dropout rate

Think of this like forcing the model to get creative. You do this by turning off random parts of the model during training. It’s a great way to stop your model being over-reliant on specific pathways and getting lazy. Instead, it encourages the LLM to use more diverse problem-solving strategies.

How do you get this right? The optimal dropout rate depends on how complicated your dataset is. A general rule of thumb is to match the dropout rate to the chance of outliers. So, for a medical diagnostic tool, it makes sense to use a higher dropout rate to improve the model’s accuracy. If you’re creating translation software, you might want to reduce the rate slightly to improve the training speed.

5. Weight decay

This keeps the model from getting too attached to any one feature, which helps prevent overfitting. Think of it as a gentle reminder to ‘keep it simple.’

6. Learning rate schedules

This adjusts the learning rate over time. Usually, you start with bold, sweeping updates and taper off into fine-tuning mode – kind of like starting with broad strokes on a canvas and refining the details later.

7. Freezing and unfreezing layers

Pre-trained models come with layers of knowledge. Freezing certain layers means you lock in their existing learning, while unfreezing others lets them adapt to your new task. Whether you freeze or unfreeze depends on how similar the old and new tasks are.

Common challenges to fine-tuning

Fine-tuning sounds great, but let’s not sugarcoat it – there are a few roadblocks you’ll probably hit:

- Overfitting: Small datasets make it easy for models to get lazy and memorise instead of generalise. You can keep this behaviour in check by using techniques like early stopping, weight decay, and dropout,
- Computational costs: Testing hyperparameters can seem like playing a game of whack-a-mole. It’s time-consuming and can be resource-intensive. Worse yet, it’s something of a guessing game. You can use tools like Optuna or Ray Tune to automate some of the grunt work (a minimal sketch follows at the end of this article),
- Every task is different: There’s no one-size-fits-all approach. A technique that works well for one project could be disastrous for another. You’ll need to experiment.

Tips to fine-tune AI models successfully

Keep these tips in mind:

- Start with defaults: Check the recommended settings for any pre-trained models. Use them as a starting point or cheat sheet,
- Consider task similarity: If your new task is a close cousin to the original, make small tweaks and freeze most layers. If it’s a total 180-degree turn, let more layers adapt and use a moderate learning rate,
- Keep an eye on validation performance: Check how the model performs on a separate validation set to make sure it’s learning to generalise and not just memorising the training data,
- Start small: Run a test with a smaller dataset before you run the whole model through the training. It’s a quick way to catch mistakes before they snowball.

Final thoughts

Tuning hyperparameters makes it easier to train your model. You’ll need to go through some trial and error, but the results make the effort worthwhile. When you get this right, the model excels at its task instead of just making a mediocre effort.

The post The role of hyperparameters in fine-tuning AI models appeared first on AI News.
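As promised in the challenges list above, here is a minimal sketch of automating the hyperparameter search with Optuna. The tiny feed-forward network and synthetic data are hypothetical stand-ins for your real model and dataset; the suggest/train/score pattern is what carries over.

```python
# A minimal sketch of automated hyperparameter search with Optuna.
# The network and synthetic data below are illustrative stand-ins only.
import optuna
import torch
import torch.nn as nn

# Synthetic stand-in data: 512 samples, 20 features, binary labels.
X = torch.randn(512, 20)
y = (X.sum(dim=1) > 0).long()

def objective(trial):
    # Let Optuna propose values for the knobs discussed in this article.
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    dropout = trial.suggest_float("dropout_rate", 0.0, 0.5)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])

    model = nn.Sequential(
        nn.Linear(20, 64), nn.ReLU(), nn.Dropout(dropout), nn.Linear(64, 2)
    )
    optimiser = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=0.01)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for _ in range(3):  # a few epochs are enough to compare trials
        for i in range(0, len(X), batch_size):
            optimiser.zero_grad()
            loss = loss_fn(model(X[i:i + batch_size]), y[i:i + batch_size])
            loss.backward()
            optimiser.step()

    # Score the trial; in practice, evaluate on a held-out validation split.
    model.eval()
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("Best hyperparameters:", study.best_params)
```

In a real project you would return accuracy on a validation split rather than the training data, so the search optimises generalisation instead of memorisation.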
-
Plaintiffs in the case of Kadrey et al. vs. Meta have filed a motion alleging the firm knowingly used copyrighted works in the development of its AI models. The plaintiffs, who include author Richard Kadrey, filed their “Reply in Support of Plaintiffs’ Motion for Leave to File Third Amended Consolidated Complaint” in the United States District Court for the Northern District of California.

The filing accuses Meta of systematically torrenting and stripping copyright management information (CMI) from pirated datasets, including works from the notorious shadow library LibGen.

According to documents recently submitted to the court, evidence reveals highly incriminating practices involving Meta’s senior leaders. Plaintiffs allege that Meta CEO Mark Zuckerberg gave explicit approval for the use of the LibGen dataset, despite internal concerns raised by the company’s AI executives. A December 2024 memo from internal Meta discussions acknowledged LibGen as “a dataset we know to be pirated,” with debates arising about the ethical and legal ramifications of using such materials. Documents also revealed that top engineers hesitated to torrent the datasets, citing concerns about using corporate laptops for potentially unlawful activities.

Additionally, internal communications suggest that after acquiring the LibGen dataset, Meta stripped CMI from the copyrighted works contained within – a practice that plaintiffs highlight as central to claims of copyright infringement. According to the deposition of Michael Clark – a corporate representative for Meta – the company implemented scripts designed to remove any information identifying these works as copyrighted, including keywords like “copyright,” “acknowledgements,” or lines commonly used in such texts. Clark attested that this practice was done intentionally to prepare the dataset for training Meta’s Llama AI models.

“Doesn’t feel right”

The allegations against Meta paint a portrait of a company knowingly partaking in a widespread piracy scheme facilitated through torrenting. According to a string of emails included as exhibits, Meta engineers expressed concerns about the optics of torrenting pirated datasets from within corporate spaces. One engineer noted that “torrenting from a [Meta-owned] corporate laptop doesn’t feel right,” but despite hesitation, the rapid downloading and distribution – or “seeding” – of pirated data took place.

Legal counsel for the plaintiffs has stated that as late as January 2024, Meta had “already torrented (both downloaded and distributed) data from LibGen.” Moreover, records show that hundreds of related documents were initially obtained by Meta months prior but were withheld during early discovery processes. Plaintiffs argue this delayed disclosure amounts to bad-faith attempts by Meta to obstruct access to vital evidence.

During a deposition on 17 December 2024, Zuckerberg himself reportedly admitted that such activities would raise “lots of red flags” and stated it “seems like a bad thing,” though he provided limited direct responses regarding Meta’s broader AI training practices.

This case originally began as an intellectual property infringement action on behalf of authors and publishers claiming violations relating to AI use of their materials. However, the plaintiffs are now seeking to add two major claims to their suit: a violation of the Digital Millennium Copyright Act (DMCA) and a breach of the California Comprehensive Computer Data Access and Fraud Act (CDAFA).
Under the DMCA, the plaintiffs assert that Meta knowingly removed copyright protections to conceal unauthorised uses of copyrighted texts in its Llama models. As cited in the complaint, Meta allegedly stripped CMI “to reduce the chance that the models will memorise this data,” and this removal of rights management indicators made discovering the infringement more difficult for copyright holders.

The CDAFA allegations involve Meta’s methods for obtaining the LibGen dataset, including allegedly engaging in torrenting to acquire copyrighted datasets without permission. Internal documentation shows Meta engineers openly discussed concerns that seeding and torrenting might prove to be “legally not ok.”

Meta case may impact emerging legislation around AI development

At the heart of this expanding legal battle lies growing concern over the intersection of copyright law and AI. Plaintiffs argue the stripping of copyright protections from textual datasets denies rightful compensation to copyright owners and allows Meta to build AI systems like Llama on the financial ruins of authors’ and publishers’ creative efforts.

The timing of these allegations arises amidst heightened global scrutiny surrounding generative AI technologies. Companies like OpenAI, Google, and Meta have all come under fire regarding the use of copyrighted data to train their models. Courts across jurisdictions are currently grappling with the long-term impact of AI on rights management, with potentially landmark cases being decided in both the US and the UK.

In this particular case, US courts have shown increasing willingness to hear complaints about AI’s potential harm to long-established copyright law precedents. Plaintiffs, in their motion, referred to The Intercept Media v. OpenAI, a recent decision from New York in which a similar DMCA claim was allowed to proceed.

Meta continues to deny all allegations in the case and has yet to publicly respond to Zuckerberg’s reported deposition statements. Whether or not the plaintiffs succeed in these amendments, authors across the world face growing anxieties about how their creative works are handled within the context of AI. With copyright law struggling to keep pace with technological advances, this case underscores the need for clearer guidance at an international level to protect both creators and innovators.

For Meta, these claims also represent a reputational risk. As AI becomes the central focus of its future strategy, the allegations of reliance on pirated libraries are unlikely to help its ambitions of maintaining leadership in the field. The unfolding case of Kadrey et al. vs. Meta could have far-reaching ramifications for the development of AI models moving forward, potentially setting legal precedents in the US and beyond.

(Photo by Amy Syiek)

See also: UK wants to prove AI can modernise public services responsibly

The post Meta accused of using pirated data for AI development appeared first on AI News.
-
When future generations look back at the rise of artificial intelligence technologies, the year 2025 may be remembered as a major turning point: the moment the industry took concrete steps towards greater inclusion and embraced decentralised frameworks that recognise and fairly compensate every stakeholder.

The growth of AI has already sparked transformation in multiple industries, but the pace of uptake has also led to concerns around data ownership, privacy, and copyright infringement. Because AI is centralised, with the most powerful models controlled by corporations, content creators have largely been sidelined.

OpenAI, the world’s most prominent AI company, has already admitted that’s the case. In January 2024, it told the UK’s House of Lords Communications and Digital Select Committee that it would not have been able to create its iconic chatbot, ChatGPT, without training it on copyrighted material. OpenAI trained ChatGPT on everything that was posted on the public internet prior to 2023, but the people who created that content – much of which is copyrighted – have not been paid any compensation; a major source of contention.

There’s an opportunity for decentralised AI projects, like that proposed by the ASI Alliance, to offer an alternative way of developing AI models. The Alliance is building a framework that gives content creators a method to retain control over their data, along with mechanisms for fair reward should they choose to share their material with AI model makers. It’s a more ethical basis for AI development, and 2025 could be the year it gets more attention.

AI’s copyright conundrum

OpenAI isn’t the only AI company that’s been accused of copyright infringement. The vast majority of AI models, including those that purport to be open-source, like Meta Platforms’ Llama 3 model, are guilty of scraping the public internet for training data. AI developers routinely help themselves to whatever content they find online, ignoring the fact that much of the material is copyrighted. Copyright laws are designed to protect the creators of original works – like books, articles, songs, software, artworks and photos – from being exploited, and they make unauthorised use of such materials illegal.

The likes of OpenAI, Meta, Anthropic, StabilityAI, Perplexity AI, Cohere, and AI21 Labs get round the law by claiming ‘fair use,’ a reference to an ambiguous clause in copyright law that allows the limited use of protected content without the need to obtain permission from the creator. But there’s no clear definition of what actually constitutes ‘fair use,’ and many authors claim that AI threatens their livelihoods.

Many content creators have resorted to legal action, the most prominent being a lawsuit filed by the New York Times against OpenAI. In the suit, the Times alleges that OpenAI committed copyright infringement when it ingested thousands of articles to train its large language models. The media organisation claims that such practice is unlawful, as ChatGPT is a competing product that aims to ‘steal audience’ from the Times website.

The lawsuit has led to a debate: should AI companies be allowed to keep consuming any content on the internet, or should they be compelled to ask for permission first, and compensate those who create training data? Consensus appears to be shifting toward the latter. For instance, the late former OpenAI researcher Suchir Balaji told the Times in an interview that he was tasked with leading the collection of data to train ChatGPT’s models.
He said his job involved scraping content from every possible source, including user-generated posts on social media, pirated book archives, and articles behind paywalls. All content was scraped without permission being sought, he said. Balaji said he initially bought OpenAI’s argument that if the information was posted online and freely available, scraping constituted fair use. However, he later began to question that stance after realising that products like ChatGPT could harm content creators. Ultimately, he said, he could no longer justify the practice of scraping data, and he resigned from the company in the summer of 2024.

A growing case for decentralised AI

Balaji’s departure from OpenAI appears to coincide with a realisation among AI companies that the practice of helping themselves to any content found online is unsustainable, and that content creators need legal protection. Evidence of this comes from the spate of content licensing deals announced over the last year. OpenAI has agreed deals with a number of high-profile content publishers, including the Financial Times, NewsCorp, Conde Nast, Axel Springer, Associated Press, and Reddit, which hosts millions of pages of user-generated content on its forums. Other AI developers, like Google, Microsoft, and Meta, have forged similar partnerships.

But it remains to be seen if these arrangements will prove to be satisfactory, especially if AI firms generate billions of dollars in revenue. While the terms of the content licensing deals haven’t been made public, The Information claims they are worth a few million dollars per year at most. Considering that OpenAI’s former chief scientist Ilya Sutskever was paid a salary of $1.9 million in 2016, the money offered to publishers may fall short of what the content is really worth. There’s also the fact that millions of smaller content creators – like bloggers and social media influencers – continue to be excluded from such deals.

The arguments around AI’s infringement of copyright are likely to last years without being resolved, and the legal ambiguity around data scraping, along with the growing recognition among practitioners that such practices are unethical, is helping to strengthen the case for decentralised frameworks.

Decentralised AI frameworks provide developers with a more principled model for AI training, where the rights of content creators are respected and every contributor can be rewarded fairly. Sitting at the heart of decentralised AI is blockchain, which enables the development, training, deployment, and governance of AI models across distributed, global networks owned by everyone. This means everyone can participate in building AI systems that are transparent, as opposed to centralised, corporate-owned AI models that are often described as “black boxes.” Just as the arguments around AI copyright infringement intensify, decentralised AI projects are making inroads; this year promises to be an important one in the shift towards more transparent and ethical AI development.

Decentralised AI in action

Late in 2024, three blockchain-based AI startups formed the Artificial Superintelligence (ASI) Alliance, an organisation working towards the creation of a “decentralised superintelligence” to power advanced AI systems anyone can use. The ASI Alliance says it’s the largest open-source, independent player in AI research and development.
It was created by SingularityNET, which has developed a decentralised AI network and compute layer; Fetch.ai, focused on building autonomous AI agents that can perform complex tasks without human assistance; and Ocean Protocol, the creator of a transparent exchange for AI training data. The ASI Alliance’s mission is to provide an alternative to centralised AI systems, emphasising open-source and decentralised platforms, including data and compute resources.

To protect content creators, the ASI Alliance is building an exchange framework based on Ocean Protocol’s technology, where anyone can contribute data to be used for AI training. Users will be able to upload data to the blockchain-based system and retain ownership of it, earning rewards whenever it’s accessed by AI models or developers. Others will be able to contribute by helping to label and annotate data to make it more accessible to AI models, and earn rewards for performing this work. In this way, the ASI Alliance promotes a more ethical way for developers to obtain the training data they need to create AI models.

Shortly after forming, the Alliance launched the ASI<Train/> initiative, focused on the development of more transparent and ethical “domain-specific models” specialising in areas like robotics, science, and medicine. Its first model is Cortex, which is said to be modelled on the human brain and designed to power autonomous robots in real-world environments. The specialised models differ from general-purpose LLMs, which are great at answering questions and creating content and images, but less useful when asked to solve more complex problems that require significant expertise. But creating specialised models will be a community effort: the ASI Alliance needs industry experts to provide the necessary data to train models.

Fetch.ai’s CEO Humayun Sheikh said the ASI Alliance’s decentralised ownership model creates an ecosystem “where individuals support groundbreaking technology and share in value creation.” Users without specific knowledge can buy and “stake” FET tokens to become part-owners of decentralised AI models and earn a share of the revenue they generate when they’re used by AI applications.

For content creators, the benefits of a decentralised approach to AI are clear. ASI’s framework lets them keep control of their data and track when it’s used by AI models. It integrates mechanisms encoded in smart contracts to ensure that everyone is fairly compensated. Participants earn rewards for contributing computational resources, data, and expertise, or by supporting the ecosystem through staking. The ASI Alliance operates a model of decentralised governance, where token holders can vote on key decisions to ensure the project evolves to benefit stakeholders, rather than the shareholders of corporations.

AI for everyone is a necessity

The progress made by decentralised AI is exciting, and it comes at a time when it’s needed. AI is evolving quickly, and centralised AI companies are currently at the forefront of adoption; for many, a major cause of concern. Given the transformative potential of AI and the risks it poses to individual livelihoods, it’s important that the industry shifts to more responsible models. AI systems should be developed for the benefit of everyone, and this means every contributor should be rewarded for participation. Only decentralised AI systems have shown they can do this.
Decentralised AI is not just a nice-to-have but a necessity, representing the only viable alternative capable of breaking big tech’s stranglehold on creativity.

The post Copyright concerns create need for a fair alternative in AI sector appeared first on AI News.
-
Microsoft has officially released its latest language model, Phi-4, on the AI repository Hugging Face. The model is available under the permissive MIT licence, allowing broad usage for developers, researchers, and businesses alike – a significant step for democratising AI innovations.

Unveiled in December 2024, Phi-4 has been drawing attention for its cutting-edge capabilities despite its compact size. Its release on Hugging Face opens the door for even wider adoption, highlighting that powerful models don’t always require massive infrastructure costs.

From Azure to open access

Although Microsoft initially announced Phi-4 last month, its availability was confined to Azure AI Foundry – the company’s development platform aimed at building AI-driven solutions. This exclusivity created a stir among the AI community, with many eager to get their hands on the model.

Microsoft’s AI Principal Research Engineer, Shital Shah, addressed the demand on X: “We have been completely amazed by the response to phi-4 release. A lot of folks had been asking us for weight release. Few even uploaded bootlegged phi-4 weights on Hugging Face. Well, wait no more. We are releasing today official phi-4 model on Hugging Face!”

The official release eliminates the need for unauthorised or “bootlegged” versions, providing a legitimate channel for developers keen to explore Phi-4’s potential.

Why Phi-4 matters

Phi-4 isn’t just another entry in Microsoft’s AI portfolio – it represents an evolution in the conversation about AI efficiency and accessibility. At a time when colossal models like GPT-4 dominate discussions due to their expansive capabilities, Phi-4 offers something revolutionary: big performance in a small package. Key benefits of Phi-4 include:

Compact size and energy efficiency

Phi-4’s lightweight architecture allows it to operate effectively on consumer-grade hardware, eliminating the need for expensive server infrastructure. Its compact form also translates to significantly reduced energy usage, which aligns well with the tech industry’s growing emphasis on sustainability and green computing.

Excels in advanced mathematical reasoning

Phi-4 shines in tasks demanding mathematical reasoning, a capability measured by its score of 80.4 on the challenging MATH benchmark. This performance outpaces many comparable and even larger models, positioning Phi-4 as a strong contender for industries such as finance, engineering, and data analytics.

Specialised applications

Training on curated datasets has made Phi-4 highly accurate for domain-specific uses. From auto-filling forms to generating tailored content, it’s particularly valuable in industries like healthcare and customer service, where compliance, speed, and accuracy are critical.

Enhanced safety features

By leveraging Azure AI’s Content Safety tools, Phi-4 incorporates mechanisms like prompt shields and protected material detection to mitigate risks associated with adversarial prompts, making it safer to deploy in live environments.

Making AI accessible to mid-sized businesses

Sustainability and security are vital, but so is cost-effectiveness. Phi-4’s capability to deliver high performance without the need for large computational resources makes it a viable choice for mid-sized enterprises eager to adopt AI solutions. This could lower barriers for businesses seeking to automate operations or enhance productivity.
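Getting started reflects that accessibility. As a minimal sketch, assuming the transformers library and that the weights are published under the "microsoft/phi-4" repository id on Hugging Face (verify the exact name on the model hub), a local test run might look like this:

```python
# A minimal sketch of running Phi-4 locally via the transformers library.
# The "microsoft/phi-4" repo id is an assumption based on the announcement;
# check the Hugging Face hub for the exact identifier before running.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/phi-4",
    device_map="auto",  # place the model on whatever hardware is available
)

# Phi-4's strength in mathematical reasoning makes a maths prompt a fair test.
messages = [{"role": "user", "content": "What is 12.5% of 240?"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```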
Innovative training techniques

The model’s training process combines synthetic datasets and curated organic data, boosting Phi-4’s effectiveness while addressing common challenges with data availability. This methodology could set the stage for future advances in model development, balancing scalability with precision.

Model for the masses

Phi-4’s launch with an MIT licence signifies more than just access – it represents a shift in how AI technologies are developed and shared. The permissive nature of this licence allows developers to use, modify, and redistribute Phi-4 with few restrictions, fostering further innovation.

This move also reflects broader trends in the AI field: a deliberate effort to democratise access to powerful models, enabling smaller organisations and independent developers to benefit from advanced technologies that were previously the preserve of tech giants or highly funded research labs.

As AI adoption becomes increasingly central across sectors, the demand for efficient, adaptable, and affordable AI models continues to climb. Phi-4 is positioned for this next phase of AI proliferation by offering impressive performance at reduced costs. It could catalyse growth particularly in industries like healthcare, where streamlined and precise computational tools make life-changing benefits possible.

At the same time, Phi-4 highlights the viability of a more sustainable AI future. By showing that smaller AI models can excel in practical applications while consuming fewer resources, Microsoft opens the door for environmentally conscious advancements in machine learning. Smaller, more efficient models are proving that size isn’t everything in AI – and the era of resource-intensive giants dominating the field may be giving way to a more diverse, inclusive, and innovative ecosystem.

See also: NVIDIA advances AI frontiers with CES 2025 announcements

The post Microsoft releases Phi-4 language model on Hugging Face appeared first on AI News.
-
AI and Big Data Expo Global is under four weeks away. Set to take place at the Olympia, London, on 5-6 February 2025, this must-attend artificial intelligence and big data event is for professionals from all industries looking to learn more about the newest technology solutions.

Key highlights:

- Headline speakers: The event boasts a stellar line-up of more than 150 speakers from leading global organisations including NVIDIA, LinkedIn, Unilever, Sainsbury’s, Co-op, Salesforce, BT Group, Meta, Lloyds Banking Group, Philips, The Economist, Jaguar Land Rover, and many others. These industry leaders will share their expertise and visions on how AI and big data are shaping the future across various sectors.
- Industry-leading agenda, including: strategic insights into the convergence of machine learning, natural language processing, and neural architectures shaping AI’s future; how AI is transforming businesses globally, beyond just augmenting intelligence; how AI impacts work, organisational culture, trust, and leadership; AI’s effect on skills, human-AI collaboration, and the workplace experience; empowering your organisation to navigate the AI transformation journey; and advanced analytics and AI for smarter, data-driven business decisions.
- Networking opportunities: With more than 7,000 attendees expected, the AI and Big Data Expo offers opportunities for networking, including the networking drinks on day 1 of the event. Plus, utilise our AI-powered matchmaking tool to connect with potential collaborators, clients and thought leaders from around the globe.
- Co-located shows: Gain access to nine co-located events, covering a wide range of technological innovations and trends. This multi-event format ensures attendees can explore the intersection of AI, big data and other emerging technologies.
- Exhibition floor: Discover the latest innovations from more than 150 industry-leading solution providers, including Salesforce, Experian, Edge Impulse, Snowflake, Coursera and more. The exhibition floor is your gateway to seeing cutting-edge products and services first-hand, offering solutions that can transform your business.

In today’s landscape, AI isn’t just a tool – it’s a strategic imperative. Executives and senior employees need to stay ahead of emerging trends to drive innovation, efficiency, and growth across their organisations. Discover how AI can transform your business! Dive deep into cutting-edge sessions covering everything from AI ethics and infrastructure to human-AI collaboration and revolutionary use cases.

Register today: Don’t miss your chance to attend this world-leading event and elevate your AI expertise. Secure your pass today by visiting our registration page.

About AI & Big Data Expo: The AI and Big Data Expo is part of TechEx – the leading technology event: [Hidden Content]. Prepare for two days of unrivalled access to the trends and innovations shaping the future of AI, automation, and big data. Plus, gain access to nine co-located events all under the TechEx Events Series. Don’t miss out!

We look forward to welcoming you to the AI & Big Data Expo Global in London!

The post AI and Big Data Expo Global: Less than 4 weeks to go! appeared first on AI News.
-
Singapore-based Firmus Technologies has been recognised with the Asia Pacific Data Centre Project of the Year award for its AI Factory facility. The facility stands out for its advanced infrastructure and focus on energy efficiency, reflecting broader efforts to meet the rising demands of AI computing sustainably.

The AI Factory is part of Firmus’s ongoing initiative to transform existing ST Telemedia Global Data Centres (STT GDC) into GPU-powered AI computing platforms. The redesigned centres are equipped with state-of-the-art hardware and efficient cooling systems, enabling them to meet both enterprise and research needs with improved energy performance metrics.

As artificial intelligence continues to demand more power, energy efficiency has become a major issue. Firmus has addressed the issue for nearly a decade with its AI Factory platform, which combines advanced immersion cooling technology with dependable design, build, and operation services. The company states its platform has several significant advantages, including:

- Energy efficiency: 45% more FLOPs per utility picojoule than traditional data centres,
- Cost-effectiveness: up to 30% lower total cost of ownership (TCO) than direct-to-chip cooling platforms,
- Scalability and sustainability: supports high-density AI workloads while reducing environmental impact,
- Global expertise: a track record in building and operating immersion-cooled data centres in Singapore and Australia.

The deployment of the AI Factory in Singapore shows how innovative approaches to data centre infrastructure can address the energy demands of AI. The project highlights a potential pathway for sustainable AI development by achieving a partial power usage effectiveness (pPUE) of 1.02 and a reduction in energy consumption of 45%. The achievement aligns with Singapore’s National AI Strategy 2.0, which emphasises sustainable growth in AI and data centre innovation.

Tim Rosenfield, co-CEO of Firmus Technologies, explained the broader vision behind the project, noting that it’s about balancing AI growth with sustainability. “By rethinking data centre design, we have created a platform that supports the growth of AI while promoting environmental sustainability. If we can do it in Singapore, where space is constrained and the humid climate is against us, we can do it anywhere,” he said.

Firmus has recently changed its leadership team, adding Dr Daniel Kearney as chief technology officer. Previously AWS’s Head of Technology for the ASEAN Enterprise business, Kearney leads the engineering team at Firmus. He pointed out how sustainable AI infrastructure is becoming essential as AI technologies expand. “This win against established data centre players recognises the importance of technology like ours in meeting the growth of AI and the energy challenges it brings,” he said.

The company has been advancing its work through the Sustainable Metal Cloud (SMC), an initiative aimed at improving the efficiency and sustainability of AI infrastructure.
Recent updates from Firmus include:

- Power efficiency benchmarks: Firmus became the first to publish comprehensive power consumption data alongside performance results for the MLPerf Training benchmark,
- Policy contributions: insights from Tim Rosenfield contributed to the Tony Blair Institute for Global Change’s policy agenda on managing the energy demands of the AI sector,
- Industry discussions: at ATxSG24, Firmus’s chairman, Edward Pretty, joined a panel featuring organisations like NVIDIA, the World Bank, and Alibaba Cloud to explore the balance between sustainability and the computational needs of AI,
- HyperCube expansion: Firmus’s team of 700 is installing the first fleet of Sustainable AI Factories, known as HyperCubes, in multiple regions,
- Engagement at NVIDIA GTC 2024: the company participated in two panels at NVIDIA’s GTC event, discussing sustainable AI infrastructure alongside partners like NVIDIA, Deloitte, and WEKA.

See also: The AI revolution: Reshaping data centres and the digital landscape

The post Singapore-based Firmus wins recognition for AI data centre design appeared first on AI News.
-
NVIDIA CEO and founder Jensen Huang took the stage for a keynote at CES 2025 to outline the company’s vision for the future of AI in gaming, autonomous vehicles (AVs), robotics, and more.

“AI has been advancing at an incredible pace,” Huang said. “It started with perception AI – understanding images, words, and sounds. Then generative AI – creating text, images, and sound. Now, we’re entering the era of ‘physical AI,’ AI that can perceive, reason, plan, and act.”

With NVIDIA’s platforms and GPUs at the core, Huang explained how the company continues to fuel breakthroughs across multiple industries while unveiling innovations such as the Cosmos platform, next-gen GeForce RTX 50 Series GPUs, and the compact AI supercomputer Project DIGITS.

RTX 50 Series: “The GPU is a beast”

One of the most significant announcements during CES 2025 was the introduction of the GeForce RTX 50 Series, powered by the NVIDIA Blackwell architecture. Huang debuted the flagship RTX 5090 GPU, boasting 92 billion transistors and achieving an impressive 3,352 trillion AI operations per second (TOPS).

“GeForce enabled AI to reach the masses, and now AI is coming home to GeForce,” said Huang. Holding the blacked-out GPU, Huang called it “a beast,” highlighting its advanced features, including dual cooling fans and its ability to leverage AI for revolutionary real-time graphics.

Set for a staggered release in early 2025, the RTX 50 Series includes the flagship RTX 5090 and RTX 5080 (available 30 January), followed by the RTX 5070 Ti and RTX 5070 (February). Laptop GPUs join the lineup in March.

In addition, NVIDIA introduced DLSS 4 – featuring ‘Multi-Frame Generation’ technology – which boosts gaming performance up to eightfold by generating three additional frames for every frame rendered. Other advancements, such as RTX Neural Shaders and RTX Mega Geometry, promise heightened realism in video games, including precise face and hair rendering using generative AI.

Cosmos: Ushering in physical AI

NVIDIA took another step forward with the Cosmos platform at CES 2025, which Huang described as a “game-changer” for robotics, industrial AI, and AVs. Much like the impact of large language models on generative AI, Cosmos represents a new frontier for AI applications in robotics and autonomous systems. “The ChatGPT moment for general robotics is just around the corner,” Huang declared.

Cosmos integrates generative models, tokenisers, and video processing frameworks to enable robots and vehicles to simulate potential outcomes and predict optimal actions. By ingesting text, image, and video prompts, Cosmos can generate “virtual world states” tailored for complex robotics and AV use cases involving real-world environments and lighting.

Top robotics and automotive leaders – including XPENG, Hyundai Motor Group, and Uber – are among the first to adopt Cosmos, which is available on GitHub via an open licence.

Pras Velagapudi, CTO at Agility, comments: “Data scarcity and variability are key challenges to successful learning in robot environments. Cosmos’ text-, image- and video-to-world capabilities allow us to generate and augment photorealistic scenarios for a variety of tasks that we can use to train models without needing as much expensive, real-world data capture.”

Empowering developers with AI models

NVIDIA also unveiled new AI foundation models for RTX PCs, which aim to supercharge content creation, productivity, and enterprise applications.
These models, presented as NVIDIA NIM (NVIDIA Inference Microservices), are designed to integrate with the RTX 50 Series hardware. Huang emphasised the accessibility of these tools: “These AI models run in every single cloud because NVIDIA GPUs are now available in every cloud.”

NVIDIA is doubling down on its push to equip developers with advanced tools for building AI-driven solutions. The company introduced AI Blueprints: pre-configured tools for crafting agents tailored to specific enterprise needs, such as content generation, fraud detection, and video management. “They are completely open source, so you could take it and modify the blueprints,” explained Huang.

Huang also announced the release of Llama Nemotron, designed for developers to build and deploy powerful AI agents.

Ahmad Al-Dahle, VP and Head of GenAI at Meta, said: “Agentic AI is the next frontier of AI development, and delivering on this opportunity requires full-stack optimisation across a system of LLMs to deliver efficient, accurate AI agents.

“Through our collaboration with NVIDIA and our shared commitment to open models, the NVIDIA Llama Nemotron family built on Llama can help enterprises quickly create their own custom AI agents.”

Philipp Herzig, Chief AI Officer at SAP, added: “AI agents that collaborate to solve complex tasks across multiple lines of the business will unlock a whole new level of enterprise productivity beyond today’s generative AI scenarios.

“Through SAP’s Joule, hundreds of millions of enterprise users will interact with these agents to accomplish their goals faster than ever before. NVIDIA’s new open Llama Nemotron model family will foster the development of multiple specialised AI agents to transform business processes.”

Safer and smarter autonomous vehicles

NVIDIA’s announcements extended to the automotive industry, where its DRIVE Hyperion AV platform is fostering a safer and smarter future for AVs. Built on the new NVIDIA AGX Thor system-on-a-chip (SoC), the platform allows vehicles to achieve next-level functional safety and autonomous capabilities using generative AI models.

“The autonomous vehicle revolution is here,” Huang said. “Building autonomous vehicles, like all robots, requires three computers: NVIDIA DGX to train AI models, Omniverse to test-drive and generate synthetic data, and DRIVE AGX, a supercomputer in the car.”

Huang explained that synthetic data is critical for AV development, as it dramatically enhances real-world datasets. NVIDIA’s AI data factories – powered by the Omniverse and Cosmos platforms – generate synthetic driving scenarios, increasing the effectiveness of training data exponentially.

Toyota, the world’s largest automaker, is committed to using NVIDIA DRIVE AGX Orin and the safety-certified NVIDIA DriveOS to develop its next-generation vehicles. Heavyweights such as JLR, Mercedes-Benz, and Volvo Cars have also adopted DRIVE Hyperion.

Project DIGITS: Compact AI supercomputer

Huang concluded his NVIDIA keynote at CES 2025 with a final “one more thing” announcement: Project DIGITS, NVIDIA’s smallest yet most powerful AI supercomputer, powered by the cutting-edge GB10 Grace Blackwell Superchip.

“This is NVIDIA’s latest AI supercomputer,” Huang declared, revealing its compact size and claiming it’s portable enough to “practically fit in a pocket.” Project DIGITS enables developers and engineers to train and deploy AI models directly from their desks, providing the full power of NVIDIA’s AI stack in a compact form.
Set to launch in May, Project DIGITS represents NVIDIA’s push to make AI supercomputing accessible to individuals as well as organisations. Vision for tomorrow Reflecting on NVIDIA’s journey since inventing the programmable GPU in 1999, Huang described the past 12 years of AI-driven change as transformative. “Every single layer of the technology stack has been fundamentally transformed,” he said. With advancements spanning gaming, AI-driven agents, robotics, and autonomous vehicles, Huang foresees an exciting future. “All of the enabling technologies I’ve talked about today will lead to surprising breakthroughs in general robotics and AI over the coming years,” Huang concludes. (Image Credit: NVIDIA) See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post NVIDIA advances AI frontiers with CES 2025 announcements appeared first on AI News. View the full article
-
Sam Altman, CEO and co-founder of OpenAI, has shared candid reflections on the company’s journey as it aims to achieve superintelligence. With ChatGPT recently marking its second anniversary, Altman outlines OpenAI’s achievements, ongoing challenges, and vision for the future of AI. “The second birthday of ChatGPT was only a little over a month ago, and now we have transitioned into the next paradigm of models that can do complex reasoning,” Altman reflects. A bold mission to achieve AGI and superintelligence OpenAI was founded in 2015 with a clear, albeit bold, mission: to develop AGI and ensure it benefits all of humanity. Altman and the founding team believed AGI could become “the most impactful technology in human history.” Yet, he recalls, the world wasn’t particularly interested in their quest back then. “At the time, very few people cared, and if they did, it was mostly because they thought we had no chance of success,” Altman explains. Fast forward to 2022, and OpenAI was still a relatively quiet research facility testing what was then referred to as ‘Chat With GPT-3.5.’ Developers had been exploring the capabilities of its API, and the excitement sparked the idea of launching a user-ready demo. This demo led to the creation of ChatGPT, which Altman acknowledges benefited from “mercifully” better branding than its initial name. When it launched on 30 November 2022, ChatGPT proved to be a tipping point. “The launch of ChatGPT kicked off a growth curve like nothing we have ever seen—in our company, our industry, and the world broadly,” he says. OpenAI has since witnessed an evolution marked by staggering interest, not just in its tools but in the broader possibilities of AI. Building at breakneck speed Altman admits that scaling OpenAI into a global tech powerhouse came with significant challenges. “In the last two years, we had to build an entire company, almost from scratch, around this new technology,” he notes, adding, “There is no way to train people for this except by doing it.” Operating in uncharted waters, the OpenAI team often faced ambiguity—making decisions on the fly and dealing with the inevitable missteps. “Building up a company at such high velocity with so little training is a messy process,” Altman explains. “It’s often two steps forward, one step back (and sometimes, one step forward and two steps back).” Yet, despite the chaos, Altman credits the team’s resilience and ability to adapt. OpenAI now boasts over 300 million weekly active users, a sharp increase from the 100 million reported just a year ago. Much of this success lies in the organisation’s ethos of learning by doing, combined with a commitment to putting “technology out into the world that people genuinely seem to love and that solves real problems.” ‘A big failure of governance’ Of course, the journey so far hasn’t been without turmoil. Altman recounts a particularly difficult chapter from November 2023 when he was suddenly ousted as CEO, briefly recruited by Microsoft, only to be reinstated by OpenAI days later amid industry backlash and staff protests. Speaking openly, Altman highlights the need for better governance structures in organisations tackling critical technologies like AI. “The whole event was, in my opinion, a big failure of governance by well-meaning people, myself included,” he admits.
“Looking back, I certainly wish I had done things differently, and I’d like to believe I’m a better, more thoughtful leader today than I was a year ago.” The episode served as a stark reminder of the complexity of managing rapid growth and the stakes involved in AI development. It also drove OpenAI to forge new governance structures “that enable us to pursue our mission of ensuring that AGI benefits all of humanity.” Altman expressed deep gratitude for the support OpenAI received during the crisis from employees, partners, and customers. “My biggest takeaway is how much I have to be thankful for and how many people I owe gratitude towards,” he emphasises. Pivoting towards superintelligence Looking forward, Altman says OpenAI is beginning to aim beyond AGI towards the development of “superintelligence”—AI systems that far surpass human cognitive capabilities. “We are now confident we know how to build AGI as we have traditionally understood it,” Altman shares. OpenAI predicts that by the end of this year, AI agents will begin to “join the workforce,” revolutionising industries with smarter automation and companion systems. Achieving superintelligence would be especially transformative for society, with the potential to accelerate scientific discoveries, but it also poses the most significant dangers. “We believe in the importance of being world leaders on safety and alignment research … OpenAI cannot be a normal company,” he notes, underscoring the need to approach innovation responsibly. OpenAI’s strategy includes gradually introducing breakthroughs into the world, allowing society to adapt alongside AI’s rapid evolution. “Iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes,” Altman argues. Reflecting on the organisation’s trajectory, Altman admits OpenAI’s path has been defined by both extraordinary breakthroughs and significant challenges—from scaling teams to navigating public scrutiny. “Nine years ago, we really had no idea what we were eventually going to become; even now, we only sort of know,” he says. What remains clear is his unwavering commitment to OpenAI’s vision. “Our vision won’t change; our tactics will continue to evolve,” Altman claims, attributing the company’s remarkable progress to the team’s willingness to rethink processes and embrace challenges. As AI continues to reshape industries and daily life, Altman’s central message is evident: While the journey has been anything but smooth, OpenAI is steadfast in its mission to unlock the benefits of AI for all. “How lucky and humbling it is to be able to play a role in this work,” Altman concludes. See also: OpenAI funds $1 million study on AI and morality at Duke University Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence appeared first on AI News. View the full article
-
Video surveillance has come a long way from simple CCTV setups. Today’s businesses demand more – smarter analytics, enhanced security, and seamless scalability. As organisations adopt AI and automation across their operations, video management systems (VMS) face new challenges: How to keep video surveillance scalable and easy to manage? Can AI analytics like face recognition or behaviour detection be integrated without breaking the budget? Is my current system prepared for modern security risks? These questions are not hypothetical. They represent real obstacles businesses face when managing video surveillance systems. Solving them requires innovative thinking, flexible tools, and a smarter approach to how systems are designed and operated. The shift to smarter surveillance Traditional video surveillance systems often fail to meet the needs of dynamic, modern environments. Whether it’s a retail chain looking to analyse customer behaviour or a factory monitoring equipment safety, the tools of yesterday aren’t enough to address today’s demands. The shift towards smarter surveillance involves integrating modular, AI-driven systems that: Adapt to your specific needs, Automate tedious tasks like footage analysis, Offer advanced analytics, like emotion detection or licence plate recognition, Remain accessible to both tech-savvy professionals and beginners. This isn’t just a technical shift; it’s a shift in mindset. Businesses now see surveillance not only as a security measure but as a strategic tool for operational insight. Meet Xeoma: The modular approach to smarter surveillance At the forefront of this smarter surveillance revolution is Xeoma, a modular, AI-powered video surveillance software that provides solutions to the challenges modern businesses face: Modularity for customisation: Xeoma’s plug-and-play structure allows businesses to tailor their surveillance systems. Whether you need facial recognition, vehicle detection, or heatmaps of customer activity, Xeoma makes it easy to add or remove modules as needed (a simplified sketch of this idea follows below). AI-powered analytics: Xeoma offers cutting-edge features like: Object recognition: Detect and classify objects like people, animals, and vehicles, Voice-to-text: Transcribe spoken words into text, Fire detection: Detect the presence of fire or smoke, Licence plate recognition: Automatically read and record vehicle licence plates, Age and gender recognition: Determine the age range and gender of individuals. Ease of use: Unlike many systems with steep learning curves, Xeoma is designed to be user-friendly. Its intuitive interface ensures that even non-technical users can quickly set up and operate the software. Seamless integration: Xeoma integrates with IoT devices, access control systems, and other third-party tools, making it an ideal choice for businesses looking to enhance their existing setups. Cost efficiency: With Xeoma, you only pay once thanks to the lifetime licences. The pricing structure ensures that businesses of all sizes, from startups to enterprises, can find a solution that fits their budgets. Unlimited scalability: Xeoma places no limit on the number of cameras it can work with. Whether a system has tens, hundreds, or thousands of cameras, Xeoma will handle them all. Encrypted communication: Xeoma uses secure communication protocols (HTTPS, SSL/TLS) to encrypt data transmitted between the server, cameras, and clients. This prevents unauthorised access during data transmission.
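To picture the plug-and-play structure referenced in the list above, here is a deliberately simplified sketch. It is hypothetical Python, not Xeoma’s actual API; the point is only that detection capabilities can behave like interchangeable pipeline stages:

```python
# Hypothetical sketch of a modular surveillance pipeline -- not Xeoma's API.
# Each module inspects a frame and appends events; modules can be added or
# removed without touching the rest of the chain.
from dataclasses import dataclass, field
from typing import List, Protocol

@dataclass
class Frame:
    pixels: bytes
    events: List[str] = field(default_factory=list)

class Module(Protocol):
    def process(self, frame: Frame) -> Frame: ...

class MotionDetector:
    def process(self, frame: Frame) -> Frame:
        frame.events.append("motion detected")  # placeholder logic
        return frame

class LicencePlateReader:
    def process(self, frame: Frame) -> Frame:
        frame.events.append("plate: AB12 CDE")  # illustrative output
        return frame

class Notifier:
    def process(self, frame: Frame) -> Frame:
        if frame.events:
            print("alert:", "; ".join(frame.events))
        return frame

# Reconfiguring the system means editing this list, nothing more.
pipeline: List[Module] = [MotionDetector(), LicencePlateReader(), Notifier()]

frame = Frame(pixels=b"\x00" * 10)
for module in pipeline:
    frame = module.process(frame)
```

Swapping a module in or out changes the system’s capabilities without rewiring anything else, which is the essence of the modular pitch.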
Xeoma’s flexible design and robust features allow it to be tailored to a wide range of scenarios, empowering organisations to meet their unique challenges while staying efficient, secure, and scalable. How Xeoma benefits your business: Scenarios Xeoma isn’t just a tool for security – it’s a versatile platform that adapts to your environment, whether you run a small retail store, manage a factory floor, or oversee an entire urban surveillance network. Retail: Elevating customer experience Picture this: You manage a busy store where you need to understand peak traffic hours and monitor for shoplifting. With Xeoma, you can: Deploy AI-based ‘face recognition’ to discreetly flag known shoplifters or VIP customers to enhance service, Use ‘visitors counter’ and ‘crowd detector’ to identify when foot traffic is highest and allocate staff accordingly, Analyse heatmaps to see which areas of the store attract the most attention, optimising product placement, Add the ‘unique visitors counter’ module to your system to group people by frequency of attendance. At the same time, age and gender recognition will assist you in tailoring your promotions more accurately, Enhance the results of your marketing efforts with eye tracking by getting insights into human psychology. Manufacturing: Ensuring workplace safety On a bustling factory floor, every second matters, and safety is critical. Xeoma can help by: Detecting if workers are in restricted zones using the ‘cross-line detector,’ Monitoring compliance with safety protocols using helmet and mask detectors, Sending real-time alerts to supervisors about potential hazards, like machinery malfunctions or unauthorised access, via channels ranging from push notifications to personalised alerts, Elevating trust and satisfaction levels with timelapse and streaming to YouTube. Urban surveillance: Protecting communities If you’re part of a city planning team or law enforcement agency, Xeoma scales effortlessly to monitor entire districts: Use licence plate recognition to track vehicles entering and exiting restricted areas, Automate responses to emergencies, from traffic incidents and rule violations (for example, detectors for speeding, running a red light, or ******** parking) to public safety threats, Identify suspicious behaviour in crowded public spaces using the ‘loitering detector,’ Detect graffiti and ads that contain prohibited words like “drugs” with text recognition, Recognise faces to find wanted or missing people with face identification. Education: Safeguarding schools For schools and universities, safety is a top priority. Xeoma provides: AI alerts with the ‘detector of abandoned objects’ and ‘sound detector’ for detecting unattended bags or abnormal behaviour, ensuring quick response times, Smoke and fire detection that allows you to respond promptly before a fire takes hold, Smart automated verification with the ‘smart-card reader’ and ‘face ID’ that helps keep unauthorised persons out, Integration with existing access control systems via API or HTTP protocol for a seamless security solution, Live streaming to your institution’s website or YouTube to enhance parental engagement or build a positive image, while eye tracking serves as an effective anti-cheat solution in monitoring systems. Hospitality: Enhancing guest experiences In the hospitality industry, guest satisfaction is everything.
Xeoma helps you: • Monitor entrances and exits with access control integration for smooth check-ins and check-outs, • Use the ‘emotion detector’ to gauge customer satisfaction in common areas, • Ensure staff compliance with protocols to maintain service quality with the ‘voice-to-text’ module. Conclusion: Connecting Xeoma to your vision Every business has its unique challenges, and Xeoma’s versatility means it can be the solution you need to overcome yours. Imagine running a business where: Your team has actionable insights at their fingertips, Potential threats are flagged before they escalate, Your surveillance system doesn’t just protect – it empowers decision-making and growth. Xeoma isn’t just about surveillance; it’s about giving you peace of mind, actionable intelligence, and the flexibility to focus on what matters most – your people, your customers, and your vision for the future. Whether you’re securing a retail space, safeguarding a factory, or protecting an entire community, Xeoma’s modular, AI-powered platform adapts to your goals and grows alongside you. Ready to see how Xeoma can transform your video surveillance strategy? Explore a free demo and start building your ideal system today. The post Rethinking video surveillance: The case for smarter, more flexible solutions appeared first on AI News. View the full article
-
OpenAI is awarding a $1 million grant to a Duke University research team to look at how AI could predict human moral judgments. The initiative highlights the growing focus on the intersection of technology and ethics, and raises critical questions: Can AI handle the complexities of morality, or should ethical decisions remain the domain of humans? Duke University’s Moral Attitudes and Decisions Lab (MADLAB), led by ethics professor Walter Sinnott-Armstrong and co-investigator Jana Schaich Borg, is in charge of the “Making Moral AI” project. The team envisions a “moral GPS,” a tool that could guide ethical decision-making. Its research spans diverse fields, including computer science, philosophy, psychology, and neuroscience, to understand how moral attitudes and decisions are formed and how AI can contribute to the process. The role of AI in morality MADLAB’s work examines how AI might predict or influence moral judgments. Imagine an algorithm assessing ethical dilemmas, such as deciding between two unfavourable outcomes in autonomous vehicles or providing guidance on ethical business practices. Such scenarios underscore AI’s potential but also raise fundamental questions: Who determines the moral framework guiding these types of tools, and should AI be trusted to make decisions with ethical implications? OpenAI’s vision The grant supports the development of algorithms that forecast human moral judgments in fields such as medicine, law, and business, which frequently involve complex ethical trade-offs. While promising, AI still struggles to grasp the emotional and cultural nuances of morality. Current systems excel at recognising patterns but lack the deeper understanding required for ethical reasoning. Another concern is how this technology might be applied. While AI could assist in life-saving decisions, its use in defence strategies or surveillance introduces moral dilemmas. Can unethical AI actions be justified if they serve national interests or align with societal goals? These questions emphasise the difficulties of embedding morality into AI systems. Challenges and opportunities Integrating ethics into AI is a formidable challenge that requires collaboration across disciplines. Morality is not universal; it is shaped by cultural, personal, and societal values, making it difficult to encode into algorithms. Additionally, without safeguards such as transparency and accountability, there is a risk of perpetuating biases or enabling harmful applications. OpenAI’s investment in Duke’s research marks a step toward understanding the role of AI in ethical decision-making. However, the journey is far from over. Developers and policymakers must work together to ensure that AI tools align with social values and emphasise fairness and inclusivity while addressing biases and unintended consequences. As AI becomes more integral to decision-making, its ethical implications demand attention. Projects like “Making Moral AI” offer a starting point for navigating a complex landscape, balancing innovation with responsibility in order to shape a future where technology serves the greater good. (Photo by Unsplash) See also: AI governance: Analysing emerging global regulations Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post OpenAI funds $1 million study on AI and morality at Duke University appeared first on AI News. View the full article
-
The emerging US-China Artificial General Intelligence (AGI) rivalry could face a major policy transformation, as the US-China Economic and Security Review Commission (USCC) recommends a Manhattan Project-style initiative and restrictions on humanoid robots in its latest report to Congress. Released in November 2024, the Commission’s annual report outlined 32 recommendations that could fundamentally alter how the two countries interact, with artificial intelligence taking centre stage in a new chapter of strategic rivalry. US-China: the AGI moonshot and critical tech controls At the heart of the report lies an ambitious proposal: establishing a government-backed programme to develop AGI – AI systems that could match and potentially exceed human cognitive abilities. However, the recommendation is just one piece of a larger technological puzzle, including export controls, investment screening, and new trade policies to preserve US technological advantages. The proposed AGI initiative would provide multi-year contracts to leading AI companies, cloud providers, and data centre operators. It would be backed by the Defense Department’s highest priority, “DX Rating” – a designation typically reserved for critical national security projects. This level of government involvement in AI development mirrors the urgency seen in previous technological races. It raises crucial questions about the role of state intervention in an industry primarily driven by private sector innovation. The Commission’s tech-focused recommendations extend beyond AI. Notable proposals include restricting imports of ********-made autonomous humanoid robots with advanced dexterity, locomotion, and intelligence capabilities. The report also targets energy infrastructure products with remote monitoring capabilities, reflecting growing concerns about connected technologies in critical infrastructure. The report builds on existing export controls in the semiconductor space by recommending stronger oversight of technology transfers and investment flows. This comes as China continues to build domestic chip-making capabilities despite international restrictions. The Commission suggests creating an Outbound Investment Office that prevents US capital and expertise from advancing China’s technological capabilities in sensitive sectors. Reshaping trade relations and investment flows Perhaps most significantly, the report recommends eliminating China’s Permanent Normal Trade Relations (PNTR) status—a move that could reshape the technology supply chain and trade flows that have defined the global tech industry for decades. This recommendation acknowledges how deeply intertwined the US and ******** tech ecosystems have become, while suggesting that this interdependence may now pose more risks than benefits. Data transparency is another key theme, with recommendations for expanded reporting requirements on investments and technology transfers. The Commission calls for better tracking of investments flowing through offshore entities, addressing a significant blind-spot in current oversight mechanisms. The report’s release comes at a critical juncture in technological development. China’s push for self-sufficiency in vital technologies and its “new quality productive forces” initiative demonstrates Beijing’s determination to lead in next-generation technologies. Meanwhile, AI capabilities and quantum computing breakthroughs have raised the stakes in technology competition. However, the Commission’s recommendations face practical challenges. 
Achieving AGI remains a complex scientific challenge that may not yield quick results, regardless of funding levels. Additionally, restrictions on technology transfers and investment could have unintended consequences for global innovation networks that have historically benefited both nations. If these recommendations are implemented, the tech industry may need to navigate an increasingly complex regulatory landscape. Companies would face new compliance requirements for international investments, technology transfers, and collaborative research projects. Challenges and future implications The effectiveness of the proposed measures will likely depend on coordination with allies and partners who share similar technological capabilities and concerns. The report acknowledges this by recommending multilateral approaches to export controls and investment screening. US-China technological competition has entered a new phase where government policy may play a more direct role in shaping development. Whether this approach accelerates or hinders innovation remains to be seen, but the tech industry should prepare for increased scrutiny and regulation of international technological collaboration. (Photo by Nathan Bingle) See also: ******** firms use cloud loophole to access US AI tech Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Manhattan Project 2.0? US eyes AGI breakthrough in escalating China rivalry appeared first on AI News. View the full article
-
When devices, networks, and AI work together seamlessly, it creates a smarter, more connected ecosystem. This isn’t a distant dream; it’s a reality rapidly emerging as blockchain, IoT, and AI come together. These technologies are no longer working in isolation – they form a trio that redefines how industries could function. David Palmer, chief product officer of Pairpoint by Vodafone, captures this shift: “Blockchain is providing trust. It gave us tokenisation, it gave us smart contracts, and it gave us a new way of automating, which is now spilling over into the wider business landscape.” Building trust with blockchain At its core, blockchain has matured from experimental concepts to practical tools for industries. Its early potential is now manifest in real-world applications like supply chain management and decentralised finance (DeFi). Blockchain not only ensures trust through transparency but lets organisations streamline operations and gain new efficiencies. Palmer described blockchain’s evolution: “It’s been years in the past where we’ve done a lot of proof of concepts, we’ve done a lot of training. It’s been a lot of headlines. But today I really want to explore how blockchain and IoT and AI can work together to really be a part of the new business digital infrastructure that’s emerging.” IoT’s expanding role in data generation IoT devices have become omnipresent, embedded in everything from cars and drones to household sensors. Experts expect that by 2030, there will be around 30 billion IoT devices worldwide. These devices generate massive amounts of data, which AI systems capitalise on to provide actionable insights. According to Palmer, “By 2030, we’re expecting over 30 billion IoT devices. These are cars, drones, cabinets, sensors, all woven into the business process and business industry.” But IoT isn’t just about data collection. It introduces the concept of the “economy of things,” where devices transact autonomously. To make this work, however, these devices need secure and reliable connectivity – a role blockchain is uniquely equipped to fulfil. AI’s appetite for reliable data AI thrives on data, but the quality and security of that data are paramount. Public datasets have reached their limits, pushing businesses to tap into proprietary data generated by IoT devices. This creates a two-way relationship: IoT devices supply data for AI, while AI enhances these devices with real-time intelligence. Palmer emphasises the importance of data trustworthiness in this ecosystem: “You need an identity which gives you origin of data. So we know the data is coming from a certain source, is signed, but then we also need to trust the AI that’s coming back.” Blockchain plays an important role in ensuring trust. It guarantees the legitimacy of both the data given to AI systems and the intelligence delivered back to IoT devices through verified digital identities and cryptographic signing. Digital wallets and the adoption of blockchain Digital wallets are becoming a cornerstone of this evolving ecosystem. Their global numbers are expected to grow from 4 billion today to 5.6 billion by 2030. Unlike traditional wallets, blockchain-enabled wallets go beyond cryptocurrencies, supporting functionalities like account abstraction and integration with tools like WalletConnect. One breakthrough is the integration of tokenised bank deposits. These bridge traditional banking with blockchain, encouraging businesses to use blockchain for their transaction needs.
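Palmer’s point about signed data of known origin maps directly onto standard public-key signatures. A minimal sketch using the Python cryptography package follows; the device name and reading are illustrative, and a real deployment would anchor device public keys in secure hardware and a shared registry (for example, a blockchain) rather than generating them in the same process:

```python
# Minimal sketch: an IoT device signs its readings so a downstream AI
# pipeline can verify the data's origin before trusting it.
# Requires: pip install cryptography
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Device side: each device holds a private key that acts as its identity.
device_key = Ed25519PrivateKey.generate()
reading = json.dumps({"device": "sensor-42", "pm2_5": 18.4}).encode()
signature = device_key.sign(reading)

# Consumer side: verify against the device's registered public key.
public_key = device_key.public_key()
try:
    public_key.verify(signature, reading)
    print("verified: reading genuinely came from sensor-42")
except InvalidSignature:
    print("rejected: origin could not be verified")
```

The same verify-before-trust step works in the other direction too, with AI outputs signed before being sent back to devices.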
Developments like these are helping blockchain make its way into broader business applications. Finance meets IoT The integration of finance into IoT devices is another step forward. Using smart contracts and AI, devices as disparate as cars and drones can now handle payments autonomously. Toll payments, EV charging, and retail purchases are just the beginning of this embedded finance ecosystem. Palmer illustrated the potential: “By linking EV chargers and vehicles to blockchain, you can then relate that to their payment credential and their payment preferences. And then you can have a peer-to-peer transaction.” The same principle applies to energy grids, where vehicles can sell energy during peak times and recharge during off-peak hours, thereby enhancing sustainability. Decentralised infrastructure networks Another interesting development is the rise of decentralised physical infrastructure networks (DePIN). These networks allow shared or tokenised resources to create community-driven infrastructures. For instance, protocols like Render pool GPU resources for rendering workloads, while Filecoin decentralises storage. According to Palmer, “It’s about how communities can build specific AI and specific connectivity infrastructure, specific payments infrastructure for their businesses.” Blockchain and the role of CBDCs Governments are also noting blockchain’s potential. Central Bank Digital Currencies (CBDCs) are being explored as a way to integrate blockchain into macroeconomic policies, such as managing money supply and redistributing income. Tokenised deposits further extend blockchain’s role by digitising traditional monetary systems. With CBDCs and tokenised deposits, blockchain is moving beyond niche applications to become an important part of financial ecosystems worldwide. The metaverse and its evolution The metaverse, once a far-off concept, is rapidly evolving. Innovations like AI-enabled smart glasses change how users interact with immersive digital content. Palmer noted: “This year, the introduction of the glasses by Meta […] allow you to […] access your content but also have access to AI agents.” AI robots are also adding a new dimension to the metaverse by bridging virtual and physical experiences. These same technologies and methods open up opportunities in a variety of industries, including manufacturing and healthcare. A seamless digital ecosystem The convergence of blockchain, IoT, and AI marks a turning point in digital transformation. Blockchain ensures trust, IoT generates data, and AI delivers intelligence. Together, these technologies promise to create a digital operating system capable of reshaping industries and economies by 2030. Palmer concludes, “If we can link billions of devices to blockchain and AI through secure infrastructure, we unlock the potential of a truly interconnected digital economy.” See also: AI meets blockchain and decentralised data Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post How blockchain, IoT, and AI are shaping the future of digital transformation appeared first on AI News. View the full article
-
As we approach a new year filled with potential, the landscape of technology, particularly artificial intelligence (AI) and machine learning (ML), is on the brink of significant transformation. Manish Jethwa, CTO at Ordnance Survey (OS), the national mapping agency for Great Britain, offers an insightful glimpse into what we can expect from these advancements and their implications for the geospatial sector. Breaking Down Barriers with AI Looking ahead, Jethwa anticipates continued significant advancements in AI and machine learning, particularly with the push towards Gen AI. According to him, the integration of large language models (LLMs) with more sophisticated agents will not only perform complex tasks on behalf of users but also further reduce barriers to interaction. This shift, especially in the geospatial field, means that translating natural language into precise data queries will become more seamless, ultimately making geospatial datasets more accessible, mainstream, and user-friendly. Training for Complex Tasks Beyond LLMs, Jethwa is optimistic about progress in the broader category of machine learning, driven by greater access to graphics processing units for training. He says: “At Ordnance Survey (OS), we’ll leverage this capability to train models for specific, complex tasks such as automatic feature extraction from imagery. “With an increasing volume of data generated automatically, hopefully next year will also bring innovative tools and techniques to validate data, ensuring it can be confidently utilised for its intended use.” He underscores the importance of not only pursuing new capabilities but also ensuring that these tools are integrated responsibly into workflows, focusing on quality and risk management. The Ethical Frontier The rapid evolution of AI brings with it an urgent need for ethical considerations. Jethwa explains: “I would like to see a greater emphasis on ethical AI and responsible technology development,” including creating AI systems that are “transparent, fair, and unbiased” while also considering their environmental and societal impact. This focus on ethics is encapsulated in OS’s Responsible AI Charter, which guides their approach to integrating new techniques safely. Moreover, Jethwa highlights the role of workforce development in successful transformations. He believes organisations must commit to “retraining and upskilling employees to prepare them for the impact of AI and digital transformation.” This is vital to ensure that in the pursuit of enhanced efficiency, companies do not “lose the personality, creativity, and emotion that we bring as humans into the workplace.” Embracing Change While Managing Risks Despite the promise of technological advancements, obstacles remain in the journey toward digital transformation. Jethwa notes that challenges such as “cultural resistance and rapid successive changes leading to change fatigue will likely persist.” He advocates for a careful balance between adopting new technologies and addressing the human elements of transformation processes. As AI continues to influence various aspects of business, from decision-making to risk management, the issue of cybersecurity also looms large. Jethwa points out that “cybersecurity threats being powered by AI are becoming more sophisticated,” urging companies to develop comprehensive strategies that cover everything from data storage to analysis documentation. 
The Imperative to Progress In an evolving landscape, organisations that stagnate risk falling behind their competitors. Jethwa explains: “Companies that fail to keep up open themselves up to risks, such as changing customer expectations as well as attracting and retaining talent.” He also emphasises the need for a “clear vision of future goals, effective communication of progress, and celebrating milestones to sustain momentum” in digital transformation initiatives. As we move into a new year filled with promise, the future of AI and geospatial technology holds transformative power – but it must be used responsibly. The path that lies ahead in 2025 requires vigilance, an unwavering commitment to ethical practices and a human touch in order to drive successful innovation. (Photos by Annie Spratt and Ordnance Survey) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Ordnance Survey: Navigating the role of AI and ethical considerations in geospatial technology appeared first on AI News. View the full article
-
Governments are scrambling to establish regulations to govern AI, citing numerous concerns over data privacy, bias, safety, and more. AI News caught up with Nerijus Šveistys, Senior Legal Counsel at Oxylabs, to understand the state of play when it comes to AI regulation and its potential implications for industries, businesses, and innovation. “The ***** of the last few years appears to have sparked a push to establish regulatory frameworks for AI governance,” explains Šveistys. “This is a natural development, as the rise of AI seems to pose issues in data privacy and protection, bias and discrimination, safety, intellectual property, and other legal areas, as well as ethics that need to be addressed.” Regions diverge in regulatory strategy The European Union’s AI Act has, unsurprisingly, positioned the region with a strict, centralised approach. The regulation, which came into force this year, is set to be fully effective by 2026. Šveistys pointed out that the EU has acted relatively swiftly compared to other jurisdictions: “The main difference we can see is the comparative quickness with which the EU has released a uniform regulation to govern the use of all types of AI.” Meanwhile, other regions have opted for more piecemeal approaches. China, for instance, has been implementing regulations specific to certain AI technologies in a phased manner. According to Šveistys, China began regulating AI models as early as 2021. “In 2021, they introduced regulation on recommendation algorithms, which [had] increased their capabilities in digital advertising. It was followed by regulations on deep synthesis models or, in common terms, deepfakes and content generation in 2022,” he said. “Then, in 2023, regulation on generative AI models was introduced as these models were making a splash in commercial usage.” The US, in contrast, remains relatively uncoordinated in its approach. Federal-level regulations are yet to be enacted, with efforts mostly emerging at the state level. “There are proposed regulations at the state level, such as the so-called California AI Act, but even if they come into power, it may still take some time before they do,” Šveistys noted. This delay in implementing unified AI regulations in the US has raised questions about the extent to which business pushback may be contributing to the slow rollout. Šveistys said that while lobbyist pressure is a known factor, it’s not the only potential reason. “There was pushback to the EU AI Act, too, which was nevertheless introduced. Thus, it is not clear whether the delay in the US is only due to lobbyism or other obstacles in the legislation enactment process,” explains Šveistys. “It might also be because some still see AI as a futuristic concern, not fully appreciating the extent to which it is already a legal issue of today.” Balancing innovation and safety Differentiated regulatory approaches could affect the pace of innovation and business competitiveness across regions. Europe’s regulatory framework, though more stringent, aims to ensure consumer protection and ethical adherence—something that less-regulated environments may lack. “More rigid regulatory frameworks may impose compliance costs for businesses in the AI field and stifle competitiveness and innovation. On the other hand, they bring the benefits of protecting consumers and adhering to certain ethical norms,” comments Šveistys.
This trade-off is especially pronounced in AI-related sectors such as targeted advertising, where algorithmic bias is increasingly scrutinised. AI governance often extends beyond laws that specifically target AI, incorporating related legal areas like those governing data collection and privacy. For example, the EU AI Act also regulates the use of AI in physical devices, such as elevators. “Additionally, all businesses that collect data for advertisement are potentially affected as AI regulation can also cover algorithmic bias in targeted advertising,” emphasises Šveistys. Impact on related industries One industry that is deeply intertwined with AI developments is web scraping. Typically used for collecting publicly available data, web scraping is undergoing an AI-driven evolution. “From data collection, validation, analysis, or overcoming anti-scraping measures, there is a lot of potential for AI to massively improve the efficiency, accuracy, and adaptability of web scraping operations,” said Šveistys. However, as AI regulation and related laws tighten, web scraping companies will face greater scrutiny. “AI regulations may also bring the spotlight on certain areas of law that were always very relevant to the web scraping industry, such as privacy or copyright laws,” Šveistys added. “At the end of the day, scraping content protected by such laws without proper authorisation could always lead to legal issues, and now so can using AI this way.” Copyright battles and legal precedents The implications of AI regulation are also playing out on a broader legal stage, particularly in cases involving generative AI tools. High-profile lawsuits have been launched against AI giants like OpenAI and its primary backer, Microsoft, by authors, artists, and musicians who claim their copyrighted materials were used to train AI systems without proper permission. “These cases are pivotal in determining the legal boundaries of using copyrighted material for AI development and establishing legal precedents for protecting intellectual property in the digital age,” said Šveistys. While these lawsuits could take years to resolve, their outcomes may fundamentally shape the future of AI development. So, what can businesses do now as the regulatory and legal landscape continues to evolve? “Speaking about the specific cases of using copyrighted material for AI training, businesses should approach this the same way as any web-scraping activity – that is, evaluate the specific data they wish to collect with the help of a legal expert in the field,” recommends Šveistys. “It is important to recognise that the AI legal landscape is very new and rapidly evolving, with not many precedents in place to refer to as of yet. Hence, continuous monitoring and adaptation of your AI usage are crucial.” Just this week, the *** Government made headlines with its announcement of a consultation on the use of copyrighted material for training AI models. Under the proposals, tech firms could be permitted to use copyrighted material unless owners have specifically opted out. Despite the diversity of approaches globally, the AI regulatory push marks a significant moment for technological governance. Whether through the EU’s comprehensive model, China’s step-by-step strategy, or narrower, state-level initiatives like in the US, businesses worldwide must navigate a complex, evolving framework. 
The challenge ahead will be striking the right balance between fostering innovation and mitigating risks, ensuring that AI remains a force for good while avoiding potential harms. (Photo by Nathan Bingle) See also: Anthropic urges AI regulation to avoid catastrophes Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post AI governance: Analysing emerging global regulations appeared first on AI News. View the full article
-
In a world where artificial intelligence is becoming omnipresent, it’s fascinating to think about the prospect of AI-powered robots and digital avatars that can experience emotions, similar to humans. AI models lack consciousness and the capacity to feel emotions, but what possibilities might arise if that were to change? The birth of emotional AI The prospect of an AI system embracing those first sparks of emotion is perhaps not as far-fetched as one might think. Already, AI systems have some ability to gauge people’s emotions, and increasingly they’re also able to replicate those feelings in their interactions with humans. It still requires a leap of faith to imagine an AI that could feel genuine emotions, but if it ever becomes possible, we’d imagine they’d be somewhat basic at first, similar to those of a child. Perhaps an AI system might be able to feel joy at successfully completing a task, or maybe even confusion when presented with a challenge it doesn’t know how to solve. From there, it’s not difficult to envision that feeling of confusion evolving to one of frustration at its repeated failures to tackle the problem in question. And as this system evolves further, perhaps its emotional spectrum might expand to even feel a tinge of sadness or regret. Should AI ever be able to feel such emotions, it wouldn’t be long before it could express more nuanced feelings, like excitement, impatience, and empathy for humans and other AIs. For instance, in a scenario where an AI system acquires a new skill or solves a new kind of problem, it might be able to experience a degree of satisfaction in success. This is similar to how humans feel when they solve a particularly taxing challenge, like a complex jigsaw puzzle, or when they do something for the first time, like driving a car. Empathy as a motivator As AI’s ability to feel emotion evolves, it would become increasingly complex, progressing to a stage where it can even feel empathy for others. Empathy is one of the most complex human emotions, involving understanding and sharing the feelings of someone else. If AI can experience such feelings, they may inspire it to become more helpful, similar to how humans are sometimes motivated to help someone less fortunate. An AI that’s designed to assist human doctors might feel sad for someone who is afflicted by a mysterious illness. The feelings might push it to try harder to find a diagnosis for the rare disease that person is suffering from. If it gets it right, the AI might feel an overwhelming sense of accomplishment at doing so, knowing that the afflicted patient will be able to receive the treatment they need. Or we can consider an AI system that’s built to detect changes to an environment. If such a system were to recognise a substantial increase in pollution in a certain area, it might feel disappointed or even saddened by such a discovery. But as with humans, the feelings might also inspire the AI to find ways to prevent this new source of pollution, perhaps by inventing a more efficient way to recycle or dispose of the toxic substance responsible. In a similar way, an AI system that encounters numerous errors in a dataset might be compelled to refine its algorithm to reduce the number of errors. This would also have a direct impact on human-to-AI interactions. It’s not hard to imagine that an AI-powered customer service bot that feels empathy for a customer might be willing to go the extra mile to help resolve that person’s problem.
Or alternatively, we might get AI teachers with a better understanding of their students’ emotions, which can then adapt teaching methods appropriately. Empathetic AI could transform the way we treat people with mental health issues. The concept of a digital therapist is not new, but if a digital therapist can better relate to their patients on an emotional level, it can figure out how best to support them. Is this even possible? Surprisingly, we may not be that far off. AI systems like Antix are already capable of expressing artificial empathy. It’s a platform for creating digital humans that are programmed to respond sympathetically when they recognise feelings of frustration, anger or upset in the people they interact with. Its digital humans can detect people’s emotions based on their speech, the kinds of words they use, intonation, and body language. The ability of Antix’s digital humans to understand emotion is partly based on the way they are trained. Each digital human is a unique non-fungible token or NFT that learns over time from its users, gaining more knowledge and evolving so it can adapt its interactions in response to an individual’s behaviour or preferences. Because digital humans can recognise emotions and replicate them, they have the potential to deliver more profound and meaningful experiences. Antix utilises the Unreal Engine 5 platform to give its creations a more realistic appearance. Creators can alter almost every aspect of their digital humans, including the voice and appearance, with the ability to edit skin tone, eye colour, and small details like eyebrows and facial hair. What sets Antix apart from other AI platforms is that users can customise the behaviour of their digital humans, to provide the most appropriate emotional response in different scenarios. Thus, digital humans can respond with an appropriate tone of voice, making the right gestures and expressions when they’re required to feel sad, for example, before transforming in an instant to express excitement, happiness, or joy. AI is getting real Emotional AI systems are a work in progress, and the result will be digital humans that feel more lifelike in any scenario where they can be useful. The CEO of Zoom has talked about the emergence of AI-powered digital twins that can participate in video calls on their user’s behalf, allowing the user to be in two places at once, so to speak. If the digital human version of your boss can express empathy, satisfaction, excitement and anger, the concept would be more effective, fostering a more realistic connection, even if the real boss isn’t present in their physical form. A customer service-focused digital human that’s able to empathise with callers will likely have a tremendous impact on customer satisfaction, and a sympathetic digital teacher might find ways to elicit more positive responses from its students, accelerating the speed at which they learn. With digital humans capable of expressing emotions, the potential for more realistic, lifelike, and immersive experiences is almost limitless, and it will result in more rewarding and beneficial interactions with AI systems. The post What might happen if AI can feel emotions? appeared first on AI News. View the full article
-
The *** Government wants to prove that AI is being deployed responsibly within public services to speed up decision-making, reduce backlogs, and enhance support for citizens. New records, part of the Algorithmic Transparency Recording Standard (ATRS), were published this week to shed light on the AI tools being used and set a benchmark for transparency and accountability in the integration of technology in public service delivery. The initiative is part of the government’s broader strategy to embrace technology to improve outcomes, echoing commitments outlined in the “Plan for Change” to modernise public services and drive economic growth through innovative solutions. The power of AI for modernisation Among the published records, the Foreign, Commonwealth and Development Office is leveraging AI to provide faster responses to Britons seeking assistance overseas. Similarly, the Ministry of Justice is utilising algorithms to help researchers gain a deeper understanding of how individuals interact with the justice system, while other departments are deploying AI to enhance job advertisements. The ATRS aims to document how such algorithmic tools are utilised and ensure their responsible application. By doing so, the government hopes to strengthen public trust in these innovations while encouraging their continued adoption across sectors. Speaking on the government’s approach, Science Secretary Peter Kyle remarked: “Technology has huge potential to transform public services for the better; we will put it to use to cut backlogs, save money, and improve outcomes for citizens across the country. Transparency in how and why the public sector is using algorithmic tools is crucial to ensure that they are trusted and effective. That is why we will continue to take bold steps like releasing these records to make sure everyone is clear on how we are applying and trialling technology as we use it to bring public services back from the brink.” Specifically, the Department for Business and Trade has highlighted its algorithmic tool designed to predict which companies are likely to export goods internationally. The AI-driven approach allows officials to target support towards high-growth potential businesses, enabling them to reach global markets faster. Officials previously relied on time-consuming manual methods to analyse the more than five million companies registered at Companies House; this advancement ensures better allocation of resources and expedited assistance. Business Secretary Jonathan Reynolds said: “Our Plan for Change will deliver economic growth, and for that to succeed, we need to support companies across the *** to realise their full potential when it comes to exporting around the globe. Our use of AI plays a vital and growing role in that mission, allowing high-growth businesses to maximise the export opportunities available to them, while ensuring that we are using taxpayers’ money responsibly and efficiently in delivering economic stability.” Establishing clear guidelines for AI in public services To bolster public trust, new guidelines have been announced to clarify the scope of algorithmic transparency records. Central government organisations will need to publish a record for any algorithmic tool that interacts directly with citizens or plays a significant role in decision-making about individuals. Limited exceptions, such as those concerning national security, apply. These records will be published once tools are piloted publicly or have become operational.
They will detail the data used to train AI models, the underlying technologies, and the measures implemented to mitigate risks. Importantly, the records also seek to confirm that – while AI tools are used to accelerate decision-making processes – human oversight remains integral, with trained staff responsible for final decisions. Dr Antonio Espingardeiro, a member of IEEE and an expert in software and robotics, commented: “AI has the potential to radically transform the public sector. In recent years, we have seen AI become a credible part of everyday public services. As it becomes more sophisticated, AI can conduct data-heavy tasks traditionally undertaken by humans. It can analyse vast quantities of information and, when coupled with machine learning, search through records and infer patterns or anomalies in data that would otherwise take decades for humans to analyse. With this announcement, the *** government has acknowledged AI’s potential and proven that technology investment is essential to improving outcomes and the delivery of vital services. Over time, machine learning and generative AI (GenAI) could bring substantial value to the public system. With increased adoption, we will soon be able to deliver the scalability that the public sector needs and relieve the pressures and workloads placed on staff.” Eleanor Watson, also a member of IEEE and an AI ethics engineer affiliated with Singularity University, added: “With AI growing more rapidly than ever before, and already being tested and employed in education, healthcare, transportation, finance, data security, and more, the government, tech leaders, and academia should work together to establish standards and regulations for safe and responsible development of AI-based systems. This way, AI can be used to its full potential as indicated with this latest announcement. Data privacy is probably the most critical ethical consideration, requiring informed consent, data anonymisation, strict access controls, secure storage, and compliance. New techniques such as homomorphic encryption, zero-knowledge proofs, federated learning, and part-trained models can help models to make use of our personal data in an encrypted form.” Transparency remains a key tenet of the *** Government’s AI strategy. This announcement follows a recent statement by Pat McFadden, Chancellor of the Duchy of Lancaster, who affirmed that the benefits of technology – particularly AI – must span both public and private sectors and be used to modernise government. As the Science Secretary’s department solidifies government efforts to create a “digital centre,” it marks a major step forward in boosting the responsible and effective use of AI across the ***’s public sector. The ATRS records offer a valuable template for how governments worldwide can deploy AI systems to maximise efficiency, increase transparency, and balance the need for innovation with ethical considerations. (Photo by Shreyas Sane) See also: MHRA pilots ‘AI Airlock’ to accelerate healthcare adoption Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post *** wants to prove AI can modernise public services responsibly appeared first on AI News. 
View the full article
-
Amazon has announced an additional $4 billion investment in Anthropic, bringing the company's total commitment to $8 billion as part of its expanding artificial intelligence strategy. The investment was announced on November 22, 2024 and strengthens Amazon's position in the AI sector, building on its established cloud computing services in the form of AWS. While Amazon maintains a minority stake in Anthropic, the investment represents a significant development in the company's approach to AI technology and cloud infrastructure.

The expanded collaboration goes beyond mere financial investment. Anthropic has now designated AWS as its "primary training partner" for AI model development, in addition to Amazon's role as a primary cloud provider. Under the agreement, Anthropic will use AWS Trainium and Inferentia chips to train and deploy its future foundation models, including any updates to the flagship Claude AI system.

AWS's competitive edge

The continuing partnership provides Amazon with several strategic advantages in the competitive cloud computing and AI services market:

Hardware innovation: The commitment to use AWS Trainium and Inferentia chips for Anthropic's advanced AI models validates Amazon's investment in custom AI chips and positions AWS as a serious competitor to NVIDIA in the AI infrastructure space.

Cloud service enhancement: AWS customers will receive early access to fine-tuning capabilities for data processed by Anthropic models. This benefit alone could attract more enterprises to Amazon's cloud platform.

Model performance: Claude 3.5 Sonnet, Anthropic's latest model available through Amazon Bedrock, has demonstrated exceptional performance in agentic coding tasks, according to Anthropic.

Amazon's multi-faceted AI strategy

While the increased investment in Anthropic is impressive in monetary terms, it represents just one component of Amazon's broader AI strategy. The company appears to be pursuing a multi-pronged approach:

External partnerships: The Anthropic investment provides immediate access to cutting-edge AI capabilities from third parties.

Internal development: Amazon continues to develop its own AI models and capabilities.

Infrastructure development: Ongoing investment in AI-specific hardware like Trainium chips demonstrates a commitment to building AI-focused infrastructure.

The expanded partnership signals Amazon's long-term commitment to AI development yet retains flexibility thanks to its minority stakeholding. This approach allows Amazon to benefit from Anthropic's innovations while preserving the ability to pursue other partnerships with external AI companies and continue internal development initiatives.

The investment reinforces the growing trend of major tech companies seeking strategic AI partnerships rather than relying solely on internal development. It also highlights the important role of cloud infrastructure in the AI industry's growth. AWS has positioned itself as a suitable platform for AI model training and deployment.
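For developers, the most tangible piece of the partnership is Claude's availability on Amazon Bedrock. A minimal sketch of invoking Claude 3.5 Sonnet via boto3 follows; the model ID shown is the Bedrock identifier in use at the time of writing, but IDs and regional availability change, so treat it as an assumption and check your region's model listing.

```python
# Minimal sketch of calling Claude 3.5 Sonnet through Amazon Bedrock with boto3.
# The model ID is an assumption based on current Bedrock listings; verify it
# (and regional availability) before use.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    contentType="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [
            {"role": "user",
             "content": "Summarise the AWS-Anthropic partnership in one sentence."}
        ],
    }),
)

# Bedrock returns a streaming body; the Anthropic payload puts text in "content"
print(json.loads(response["body"].read())["content"][0]["text"])
```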
The post Amazon stakes $4bn more in Anthropic–the next tech arms race? appeared first on AI News.

View the full article

-
CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications. The findings reveal enthusiasm for GenAI's potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems.

While much has been speculated about the transformative impact of GenAI, the survey's results paint a clearer picture of how practitioners are thinking about its role in cybersecurity. According to the report, "We're entering the era of GenAI in cybersecurity." However, as organisations adopt this promising technology, their success will hinge on ensuring the safe, responsible, and industry-specific deployment of GenAI tools.

CrowdStrike's research reveals five pivotal findings that shape the current state of GenAI in cybersecurity:

Platform-based GenAI is favoured

80% of respondents indicated a preference for GenAI delivered through integrated cybersecurity platforms rather than standalone tools. Seamless integration is cited as a crucial factor, with many preferring tools that work cohesively with existing systems. "GenAI's value is linked to how well it works within the broader technology ecosystem," the report states.

Moreover, almost two-thirds (63%) of those surveyed expressed willingness to switch security vendors to access GenAI capabilities from competitors. The survey underscores the industry's readiness for unified platforms that streamline operations and reduce the complexity of adopting new point solutions.

GenAI built by cybersecurity experts is a must

Security teams believe GenAI tools should be specifically designed for cybersecurity, not general-purpose systems. 83% of respondents reported they would not trust tools that provide "unsuitable or ill-advised security guidance." Breach prevention remains a key motivator, with 74% stating they had faced breaches within the past 18 months or were concerned about vulnerabilities.

Respondents prioritised tools from vendors with proven expertise in cybersecurity, incident response, and threat intelligence over suppliers with broad AI leadership alone. As CrowdStrike summarised, "The emphasis on breach prevention and vendor expertise suggests security teams would avoid domain-agnostic GenAI tools."

Augmentation, not replacement

Despite growing fears of automation replacing jobs in many industries, the survey's findings indicate minimal concerns about job displacement in cybersecurity. Instead, respondents expect GenAI to empower security analysts by automating repetitive tasks, reducing burnout, onboarding new personnel faster, and accelerating decision-making. GenAI's potential for augmenting analysts' workflows was underscored by its most requested applications: threat intelligence analysis, assistance with investigations, and automated response mechanisms. As noted in the report, "Respondents overwhelmingly believe GenAI will ultimately optimise the analyst experience, not replace human labour."
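That "draft, then approve" division of labour is easy to picture in code. The toy sketch below shows the augmentation pattern applied to alert triage; the alert fields are invented and `ask_llm` is a hypothetical placeholder for whatever GenAI endpoint a platform exposes, since the survey does not describe any specific API.

```python
# Toy sketch of GenAI-assisted alert triage: the model drafts, the analyst
# decides. `ask_llm` is a hypothetical placeholder, not a real vendor API;
# wire it to your platform's GenAI endpoint.
alert = {
    "rule": "Multiple failed logins followed by success",
    "host": "finance-ws-042",
    "user": "j.doe",
    "count": 27,
}

def ask_llm(prompt: str) -> str:
    # Placeholder: in production, call your security platform's GenAI API here.
    return ("Likely credential-stuffing attempt that eventually succeeded; "
            "recommend forcing a password reset and reviewing session activity.")

draft = ask_llm(f"Triage this alert for a SOC analyst: {alert}")
print("DRAFT (requires analyst approval):", draft)
```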
ROI outweighs cost concerns

For organisations evaluating GenAI investments, measurable return on investment (ROI) is the paramount concern, ahead of licensing costs or pricing model confusion. Respondents expect platform-led GenAI deployments to deliver faster results, thanks to cost savings from reduced tool management burdens, streamlined training, and fewer security incidents.

According to the survey data, the expected ROI breakdown includes 31% from cost optimisation and more efficient tools, 30% from fewer incidents, and 26% from reduced management time. Security leaders are clearly focused on ensuring the financial justification for GenAI investments.

Guardrails and safety are crucial

GenAI adoption is tempered by concerns around safety and privacy, with 87% of organisations either implementing or planning new security policies to oversee GenAI use. Key risks include exposing sensitive data to large language models (LLMs) and adversarial attacks on GenAI tools. Respondents rank safety and privacy controls among their most desired GenAI features, highlighting the need for responsible implementation.

Reflecting the cautious optimism of practitioners, only 39% of respondents firmly believed that the rewards of GenAI outweigh its risks. Meanwhile, 40% considered the risks and rewards "comparable."

Current state of GenAI adoption in cybersecurity

GenAI adoption remains in its early stages, but interest is growing. 64% of respondents are actively researching or have already invested in GenAI tools, and 69% of those currently evaluating their options plan to make a purchase within the year.

Security teams are primarily driven by three concerns: improving attack detection and response, enhancing operational efficiency, and mitigating the impact of staff shortages. Among economic considerations, the top priority is ROI – a sign that security leaders are keen to demonstrate tangible benefits to justify their spending.

CrowdStrike emphasises the importance of a platform-based approach, where GenAI is integrated into a unified system. Such platforms enable seamless adoption, measurable benefits, and safety guardrails for responsible usage. According to the report, "The future of GenAI in cybersecurity will be defined by tools that not only advance security but also uphold the highest standards of safety and privacy."

The CrowdStrike survey concludes by affirming that "GenAI is not a silver bullet" but has tremendous potential to improve cybersecurity outcomes. As organisations evaluate its adoption, they will prioritise tools that integrate seamlessly with existing platforms, deliver faster response times, and ensure safety and privacy compliance. With threats becoming more sophisticated, the role of GenAI in enabling security teams to work faster and smarter could prove indispensable. While still in its infancy, GenAI in cybersecurity is poised to shift from early adoption to mainstream deployment, provided organisations and vendors address its risks responsibly.

See also: Keys to AI success: Security, sustainability, and overcoming silos

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News.

View the full article
-
Privacy laws in the United States are a patchwork at best. More often than not, they miss the mark, leaving most people with little actual privacy. And when such laws are enacted, they can seem tailored to protect those in positions of power. Even laws designed to protect crime victims can end up shielding the names of abusive officers by labelling them as victims of crime in cases like resisting arrest or assaulting an officer. Such accusations are often used in cases of excessive force, keeping cops' names out of the spotlight.

For example, a recent New Jersey law emerged from a tragic event in which a government employee faced violence, sparking a legislative response. Known as "Daniel's Law," it was created after the personal information of a federal judge's family was used by a murderer to track them down. Instead of a broader privacy law that could protect all residents of New Jersey, it focused exclusively on safeguarding certain public employees.

Under the law, judges, prosecutors, and police officers can request that their personal information (addresses and phone numbers, for example) be scrubbed from public databases. Popular services that people use to look up information, such as Whitepages or Spokeo, must comply. While this sounds like a win for privacy, the protections stop there. The average citizen is still left exposed, with no legal recourse if their personal data is misused or sold.

At the centre of the debate is a lawyer who has taken up the cause of protecting cops' personal data, suing numerous companies for making this type of information accessible. While noble at first glance, a deeper look raises questions. It transpires that the lawyer's company has previously collected and monetised personal data. And when a data service responded to his demands by freezing access to some of the firm's databases, he and his clients cried foul – despite specifically requesting restrictions on how their information could be used.

It's also worth noting how unevenly data protection measures are applied. Cops, for instance, frequently rely on the same tools and databases they're now asking to be restricted. These services have long been used by law enforcement for investigations and background checks. Yet, when law enforcement data appears in such systems, special treatment is demanded.

A recent anecdote involved a police union leader who was shown a simple property record pulled from an online database. The record displayed basic details like his home address and his property's square footage – information anyone could find with a few clicks. His reaction was one of shock and anger – an obvious disconnect. For everyday citizens, this level of data exposure is a given; for law enforcement, it apparently requires a level of granular exclusion that is not practical.

Perhaps everyone, including law enforcement personnel, deserves better safeguards against data harvesting and misuse. What Daniel's Law and the subsequent disputes involving police officers point to is the need to improve how data is treated for everyone, not just one group of society. Instead of expanding privacy rights to all New Jersey residents, the law carves out exceptions for the powerful – leaving the rest of the population as vulnerable as ever.
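Mechanically, compliance amounts to suppressing covered records on request. The toy sketch below shows only that suppression step, with invented fields; real Daniel's Law compliance involves verified takedown requests and statutory deadlines, none of which is modelled here.

```python
# Toy sketch of a Daniel's Law-style suppression filter. Field names and the
# matching rule are invented for illustration; real compliance involves formal
# takedown requests, identity verification, and statutory response deadlines.
records = [
    {"name": "A. Resident", "address": "12 Elm St", "protected": False},
    {"name": "J. Judge", "address": "34 Oak Ave", "protected": True},  # approved removal request
]

def scrub(records: list[dict]) -> list[dict]:
    """Redact address details for anyone with an approved removal request."""
    out = []
    for r in records:
        if r["protected"]:
            r = {**r, "address": "[removed on request]"}
        out.append(r)
    return out

for r in scrub(records):
    print(r["name"], "-", r["address"])
```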
(Photo by Unsplash)

See also: EU AI legislation sparks controversy over data transparency

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NJ cops demand protections against data brokers appeared first on AI News.

View the full article
-
Artificial intelligence platform provider Clarifai has unveiled a new compute orchestration capability that promises to help enterprises optimise their AI workloads in any computing environment, reduce costs, and avoid vendor lock-in.

Announced on December 3, 2024, the public preview release lets organisations orchestrate AI workloads through a unified control plane, whether those workloads are running on cloud, on-premises, or air-gapped infrastructure. The platform can work with any AI model and hardware accelerator, including GPUs, CPUs, and TPUs.

"Clarifai has always been ahead of the curve, with over a decade of experience supporting large enterprise and mission-critical government needs with the full stack of AI tools to create custom AI workloads," said Matt Zeiler, founder and CEO of Clarifai. "Now, we're opening up capabilities we built internally to optimise our compute costs as we scale to serve millions of models simultaneously."

The company claims its platform can reduce compute usage by 3.7x through model packing optimisations while supporting over 1.6 million inference requests per second with 99.9997% reliability. According to Clarifai, the optimisations can potentially cut costs by 60-90%, depending on configuration.

Capabilities of the compute orchestration platform include:

- Cost optimisation through automated resource management, including model packing, dependency simplification, and customisable auto-scaling options that can scale to zero for model replicas and compute nodes.
- Deployment flexibility on any hardware vendor, including cloud, on-premises, air-gapped, and Clarifai SaaS infrastructure.
- Integration with Clarifai's AI platform for data labeling, training, evaluation, workflows, and feedback.
- Security features that allow deployment into customer VPCs or on-premises Kubernetes clusters without requiring open inbound ports, VPC peering, or custom IAM roles.

The platform emerged from Clarifai customers' issues with AI performance and cost. "If we had a way to think about it holistically and look at our on-prem costs compared to our cloud costs, and then be able to orchestrate across environments with a cost basis, that would be incredibly valuable," noted a customer, as cited in Clarifai's announcement.

The compute orchestration capabilities build on Clarifai's existing AI platform that, the company says, has processed over 2 billion operations in computer vision, language, and audio AI. The company reports maintaining 99.99%+ uptime and 24/7 availability for critical applications.

The compute orchestration capability is currently available in public preview. Organisations interested in testing the platform should contact Clarifai for access.
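Clarifai has not published how its model packing works; the 3.7x figure implies co-locating several models per accelerator. As a rough intuition only, a first-fit-decreasing bin-packing heuristic like the sketch below (with invented model sizes and an assumed 80 GB node) captures the idea of squeezing models onto as few nodes as possible.

```python
# Illustrative sketch of "model packing": greedily co-locating models on as few
# GPU nodes as possible. This is a generic first-fit-decreasing heuristic with
# invented sizes, not Clarifai's algorithm.
NODE_GPU_MEM_GB = 80  # assumed per-node accelerator memory

models = {"vision-large": 35, "llm-small": 20, "asr": 15, "embedder": 8, "reranker": 5}

nodes: list[dict] = []
for name, mem in sorted(models.items(), key=lambda kv: -kv[1]):  # largest first
    for node in nodes:
        if node["free"] >= mem:          # first node with room wins
            node["models"].append(name)
            node["free"] -= mem
            break
    else:                                 # no node fits: provision a new one
        nodes.append({"models": [name], "free": NODE_GPU_MEM_GB - mem})

for i, node in enumerate(nodes):
    print(f"node {i}: {node['models']} ({node['free']} GB free)")
```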
The post New Clarifai tool orchestrates AI across any infrastructure appeared first on AI News.

View the full article

-

Artificial Intelligence and its associated innovations have revamped the global technological landscape, with recent data released by the US government predicting 13% growth in IT-related opportunities over the next six years – potentially adding 667,600 new jobs to the sector. Researchers have stated that by 2034, the AI sector's cumulative valuation may reach $3.6 trillion across industries.

The healthcare sector has already integrated AI-based diagnostic tools, with 38% of today's major medical providers using the technology. The financial sector is also expecting AI to contribute approximately $15.7 trillion to the global economy by 2030, and the retail industry anticipates anywhere between $400 billion and $660 billion annually through AI-driven customer experiences. It is estimated that approximately 83% of companies now have AI exploration as an agenda item for continued technical growth, especially given its capacity to drive innovation, enhance efficiency, and create sustainable competitive advantage.

Decentralising AI's foundations

While AI's potential is seemingly limitless, its rapid growth has brought a challenge: the centralisation of AI development and data management. As AI systems become more sophisticated, risks like dataset manipulation, biased training models, and opaque decision-making processes threaten to undermine their potential.

Various blockchain tech providers have taken steps to decentralise the sector, offering infrastructure frameworks that change how AI systems are developed, trained, and deployed. Space and Time (SXT) has devised a verifiable database that aims to bridge the gap between disparate areas, providing users with transparent, secure development tools so that AI agents can execute transactions with greater levels of data integrity. The platform's innovation lies in its ability to provide contextual data which AI agents can use for executing trades and purchases in ways that end-users can validate.

Another project of note is Chromia. It takes a similar approach, with a focus on creating a decentralised architecture to handle complex, data-intensive AI applications. Speaking about the platform's capabilities, Yeou Jie Goh, Head of Business Development at Chromia, said: "Our relational blockchain is specifically designed to support AI applications, performing hundreds of read-write operations per transaction and indexing data in real-time. We're not just building a blockchain; we're creating the infrastructure for the next generation of AI development."

Chromia wants to lower the barriers to entry for data scientists and machine learning engineers. By providing a SQL-based relational blockchain, the platform makes it easier for technical professionals to build and deploy AI applications on decentralised infrastructure. "Our mission is to position Chromia as the transparency layer of Web3, providing a robust backbone for data integrity across applications," Goh said. Chromia has already formed partnerships with Elfa AI, Chasm Network, and Stork.

Establishing a roadmap for technological sovereignty

The synergy between AI and blockchain is more than a fad; rather, it is a reimagining of AI's infrastructure. Space and Time, for instance, is working to expand its ecosystem in multiple domains, including AI, DeFi, gaming, and decentralised physical infrastructure networks (DePIN). Its strategy focuses on onboarding developers and building a mainnet that delivers verifiable data to smart contracts and AI agents.
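What "verifiable data" buys an agent is the ability to check a result before acting on it. The sketch below reduces the idea to a bare hash commitment; Space and Time's actual design uses cryptographic SQL proofs rather than a plain hash, so treat this purely as an illustration of the verify-then-act pattern, with invented data.

```python
# Toy illustration of verifiable query results: the client checks a returned
# result against a commitment published by the data layer before acting on it.
# SXT's real design uses cryptographic proofs, not a bare hash; the data here
# is invented.
import hashlib
import json

def commit(result: dict) -> str:
    return hashlib.sha256(json.dumps(result, sort_keys=True).encode()).hexdigest()

# The data layer returns a result plus a commitment the user can check out-of-band.
result = {"pair": "ETH/USD", "price": 3050.42}
published_commitment = commit(result)

# The agent verifies before executing a trade on the result.
assert commit(result) == published_commitment, "result failed verification"
print("verified; safe for the agent to act on:", result)
```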
Chromia is equally ambitious, having launched a $20 million Data and AI Ecosystem Fund earlier this year. The project's 'Asgard Mainnet Upgrade' also introduced an 'Extensions' feature that makes applications more adaptable.

The implications of AI's shift toward decentralisation are of significant interest to Nate Holiday, CEO of Space and Time. He predicts that blockchain-based transactions associated with AI agents could grow from the current 3% of the market to 30% in the near future. He said: "Ushering in this inevitable, near-term future is going to require data infrastructure like SXT that provides AI agents with the context that they need to execute trades and purchases in a way that the end user can verify."

Chromia's Yeou Jie Goh sees the transition not just as a technological innovation but as a means of creating a more transparent, secure, and democratised technological ecosystem. By using blockchain's inherent strengths – immutability, transparency, and decentralisation – the two companies are working to create intelligent systems that are powerful, accountable, ethical, and aligned with human values.

The post A new decentralised AI ecosystem and its implications appeared first on AI News.

View the full article
-
In just a few years, the realm of AI has transcended its initial computational boundaries, emerging as one of the transformative forces of the 21st century and permeating virtually every major economic sector. The global AI market was valued at $638.23 billion during Q4 2024 and is projected to reach a valuation of $3.6 trillion by 2034, largely because AI has the potential to gain widespread adoption in multiple industries.

For instance, in healthcare, 38% of all major medical providers use advanced AI diagnostic tools. Similarly, the financial sector has demonstrated impressive integrations, with AI projected to contribute approximately $15.7 trillion to the global economy by 2030. The retail industry has set its sights on anywhere between $400 billion and $660 billion annually thanks to AI-driven customer experiences, while the cybersecurity sector is set to register a 23.6% growth rate by 2027 because of AI-powered intelligent security technologies. It is estimated that about 83% of companies have already begun considering AI as a strategic priority, recognising its potential to drive innovation, enhance efficiency, and create competitive advantages.

Simplifying everyday life with AI

With the global tech landscape having transformed over the last couple of years, we are now at a point where AI is starting to automate various mundane, time-consuming everyday tasks. The concept of 'AI twins' has gained traction recently, allowing individuals to manage scheduling, respond to emails, conduct research, and handle complex administrative tasks efficiently.

These digital companions represent more than just a caricature of a person's real-world identity; they offer productivity accelerators designed to liberate anyone from repetitive work cycles, allowing them to focus on more creative, strategic, or emotionally nuanced tasks. They can also help reduce the cognitive and emotional burden of managing multiple responsibilities, which could otherwise be overwhelming for some.

Leading this revolution is Twin Protocol, a platform that seeks to redefine how humans interact with AI, primarily via the creation of secure, dynamic digital representations that can learn, adapt, and evolve alongside their human counterparts. By using the power of trained machine learning algorithms and decentralised ledgers, Twin Protocol allows individuals to develop digital twins that capture not just information, but individual expertise and personality traits.

The platform's potential spans industries ranging from healthcare to manufacturing and finance. Imagine being able to deploy a perpetually-available AI twin that can provide personalised healthcare advice, or a digital representation of a financial advisor offering real-time, context-aware investment strategies. These twins aren't designed to serve as mere information repositories but as intelligent, dynamic tools capable of understanding context, learning continuously, and providing nuanced, personalised interactions.

What sets Twin Protocol apart is its commitment to maintaining individual agency and data privacy. Through its blockchain-based 'Twin Vault', users retain control over their digital identity, ensuring that personal information remains secure. Thanks to its unique proposition, the platform has attracted several collaborations, including partnerships with SingularityNET and notable figures like Deepak Chopra and Robert Bell (founding member of Kool & the Gang).
AI’s potential is still uncharted It is estimated that over the coming decade, AI’s potential could grow hugely, with offerings like Twin Protocol demonstrating next-generation technology, allowing users to explore new concepts like digital twins. From personalised education to industrial optimisation, AI is moving beyond the category of being a tool, becoming a transformative partner capable of extending individual capabilities. AI’s journey and symbiosis is likely to push the boundaries of what’s possible today. Some of the most exciting innovations emanating from the field will lie not in the technology, but in how its potential is applied to other fields. Interesting times are ahead! The post The ongoing AI revolution is reshaping the world, one algorithm at a time appeared first on AI News. View the full article
-
Google CEO Sundar Pichai has announced the launch of Gemini 2.0, a model that represents the next step in Google's ambition to revolutionise AI. A year after introducing the Gemini 1.0 model, this major upgrade incorporates enhanced multimodal capabilities, agentic functionality, and innovative user tools designed to push boundaries in AI-driven technology.

Leap towards transformational AI

Reflecting on Google's 26-year mission to organise and make the world's information accessible, Pichai remarked, "If Gemini 1.0 was about organising and understanding information, Gemini 2.0 is about making it much more useful."

Gemini 1.0, released in December 2023, was notable for being Google's first natively multimodal AI model. The first iteration excelled at understanding and processing text, video, images, audio, and code. Its enhanced 1.5 version became widely embraced by developers for its long-context understanding, enabling applications such as the productivity-focused NotebookLM.

Now, with Gemini 2.0, Google aims to accelerate the role of AI as a universal assistant capable of native image and audio generation, better reasoning and planning, and real-world decision-making capabilities. In Pichai's words, the development represents the dawn of an "agentic era."

"We have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision," Pichai explained.

Gemini 2.0: Core features and availability

At the heart of today's announcement is the experimental release of Gemini 2.0 Flash, the flagship model of Gemini's second generation. It builds upon the foundations laid by its predecessors while delivering faster response times and advanced performance.

Gemini 2.0 Flash supports multimodal inputs and outputs, including the ability to generate native images in conjunction with text and produce steerable text-to-speech multilingual audio. Additionally, users can benefit from native tool integration such as Google Search and even third-party user-defined functions.

Developers and businesses will gain access to Gemini 2.0 Flash via the Gemini API in Google AI Studio and Vertex AI, while larger model sizes are scheduled for broader release in January 2025.

For global accessibility, the Gemini app now features a chat-optimised version of the 2.0 Flash experimental model. Early adopters can experience this updated assistant on desktop and mobile, with a mobile app rollout imminent. Products such as Google Search are also being enhanced with Gemini 2.0, unlocking the ability to handle complex queries like advanced math problems, coding enquiries, and multimodal questions.

Comprehensive suite of AI innovations

The launch of Gemini 2.0 comes with compelling new tools that showcase its capabilities. One such feature, Deep Research, functions as an AI research assistant, simplifying the process of investigating complex topics by compiling information into comprehensive reports. Another upgrade enhances Search with Gemini-enabled AI Overviews that tackle intricate, multi-step user queries.

The model was trained using Google's sixth-generation Tensor Processing Units (TPUs), known as Trillium, which Pichai notes "powered 100% of Gemini 2.0 training and inference." Trillium is now available for external developers, allowing them to benefit from the same infrastructure that supports Google's own advancements.
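For developers, access via the Gemini API is straightforward. Below is a minimal sketch assuming the google-genai Python SDK released alongside Gemini 2.0 and an API key from Google AI Studio in the GEMINI_API_KEY environment variable; experimental model names change as releases graduate, so check the current model list before relying on the one shown.

```python
# Minimal sketch of calling Gemini 2.0 Flash through the Gemini API, assuming
# the google-genai Python SDK. The experimental model name is current at the
# time of writing but may change; check Google AI Studio's model list.
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="In one sentence, what does 'agentic' mean for an AI assistant?",
)
print(response.text)
```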
Pioneering agentic experiences

Accompanying Gemini 2.0 are experimental "agentic" prototypes built to explore the future of human-AI collaboration, including:

Project Astra: A universal AI assistant

First introduced at I/O earlier this year, Project Astra taps into Gemini 2.0's multimodal understanding to improve real-world AI interactions. Trusted testers have trialled the assistant on Android, offering feedback that has helped refine its multilingual dialogue, memory retention, and integration with Google tools like Search, Lens, and Maps. Astra has also demonstrated near-human conversational latency, with further research underway for its application in wearable technology, such as prototype AI glasses.

Project Mariner: Redefining web automation

Project Mariner is an experimental web-browsing assistant that uses Gemini 2.0's ability to reason across text, images, and interactive elements like forms within a browser. In initial tests, it achieved an 83.5% success rate on the WebVoyager benchmark for completing end-to-end web tasks. Early testers using a Chrome extension are helping to refine Mariner's capabilities while Google evaluates safety measures that ensure the technology remains user-friendly and secure.

Jules: A coding agent for developers

Jules, an AI-powered assistant built for developers, integrates directly into GitHub workflows to address coding challenges. It can autonomously propose solutions, generate plans, and execute code-based tasks – all under human supervision. This experimental endeavour is part of Google's long-term goal to create versatile AI agents across various domains.

Gaming applications and beyond

Extending Gemini 2.0's reach into virtual environments, Google DeepMind is working with gaming partners like Supercell on intelligent game agents. These experimental AI companions can interpret game actions in real-time, suggest strategies, and even access broader knowledge via Search. Research is also being conducted into how Gemini 2.0's spatial reasoning could support robotics, opening doors for physical-world applications in the future.

Addressing responsibility in AI development

As AI capabilities expand, Google emphasises the importance of prioritising safety and ethical considerations. Google claims Gemini 2.0 underwent extensive risk assessments, bolstered by the Responsibility and Safety Committee's oversight to mitigate potential risks. Additionally, its embedded reasoning abilities allow for advanced "red-teaming," enabling developers to evaluate security scenarios and optimise safety measures at scale.

Google is also exploring safeguards to address user privacy, prevent misuse, and ensure AI agents remain reliable. For instance, Project Mariner is designed to prioritise user instructions while resisting malicious prompt injections, preventing threats like phishing or fraudulent transactions. Meanwhile, privacy controls in Project Astra make it easy for users to manage session data and deletion preferences.

Pichai reaffirmed the company's commitment to responsible development, stating, "We firmly believe that the only way to build AI is to be responsible from the start."

With the Gemini 2.0 Flash release, Google is edging closer to its vision of building a universal assistant capable of transforming interactions across domains.

See also: Machine unlearning: Researchers make AI models 'forget' data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Gemini 2.0: Google ushers in the agentic AI era appeared first on AI News. View the full article