Discover how chatbots for marketing can boost your ROI with enhanced engagement and instant customer responses.
What are chatbots?
Chatbots are automated software applications designed to simulate human conversation. They interact with users through text or voice, providing immediate responses and performing various tasks. AI chatbots can understand and process natural language, enabling them to handle complex queries and provide relevant information or services.
Chatbots come in various forms, including:
Rule-based chatbots: Respond to specific commands predetermined by developers,
AI-driven chatbots: Use machine learning and natural language processing (NLP) to understand and adapt to user queries.
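To make the distinction concrete, here is a minimal Python sketch of the two approaches. Everything in it is illustrative: the keywords, responses, and the classify_intent helper are hypothetical stand-ins, with the helper playing the role an NLP model would fill in a real AI-driven bot.

```python
# Rule-based chatbot: responds only to keywords predetermined by developers.
RULES = {
    "opening hours": "We're open 9am-6pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 working days.",
}

def rule_based_reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand that. Type 'help' to see what I can do."

# AI-driven chatbot (sketch): infer the user's intent rather than match exact words.
# classify_intent is a hypothetical stand-in for an NLP/ML model.
def classify_intent(message: str) -> tuple[str, float]:
    if "money back" in message.lower():
        return "refund", 0.92   # (intent, confidence)
    return "unknown", 0.30

def ai_reply(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence >= 0.7 and intent in RULES:
        return RULES[intent]
    return rule_based_reply(message)  # fall back to simple keyword rules

print(rule_based_reply("How do I get my money back?"))  # misses the intent
print(ai_reply("How do I get my money back?"))          # answers the refund question
```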
The importance of chatbots in marketing
Chatbots have become an essential component in modern marketing strategies. They offer marketers a way to connect with consumers efficiently, enhance customer experience, and streamline interactions.
Enhanced customer engagement: Chatbots engage customers by providing instant responses and personalised interactions,
24/7 availability: They operate around the clock, ensuring customer inquiries are addressed at any time,
Cost-effectiveness: Reduce the need for extensive human customer support, lowering operational costs,
Data collection: Gather valuable customer data and insights for better marketing strategies,
Increased lead generation: Facilitate the collection of leads by interacting with potential customers and capturing their information.
Chatbots play a crucial role in improving customer engagement. They provide a dynamic and interactive way for businesses to communicate with their audience, fostering stronger relationships and increasing satisfaction. Chatbots can quickly address common questions, offer recommendations, and guide customers through the purchasing process, creating a more personalised experience.
One of the most valuable features of chatbots is their ability to operate around the clock. Unlike human support teams, chatbots are available 24/7, ensuring customers receive immediate assistance regardless of the time of day. Continuous availability can significantly enhance the customer experience, leading to higher levels of satisfaction and loyalty.
The implementation of chatbots in marketing strategies not only streamlines operations but also delivers a more consistent and fulfilling customer experience. The result is an increase in engagement, satisfaction, and overall effectiveness in marketing campaigns.
Implementing chatbots in marketing strategies
Integrating chatbots into marketing strategies can significantly enhance customer engagement and streamline overall marketing efforts. This section delves into how chatbots can be used for personalised customer interactions and how they facilitate data collection and analysis.
Chatbots offer a unique opportunity to create personalised interactions with customers. By using AI and machine learning, chatbots can tailor responses based on user behaviours, preferences, and past interactions. The personalised approach can make customers feel valued and understood, increasing their satisfaction and loyalty.
Key aspects of personalised customer interactions:
Greeting users by name: Addressing customers by their names makes interactions feel more personal and engaging,
Tailored recommendations: Based on previous interactions, chatbots can recommend products, content, or services that align with individual preferences,
Customising responses: Chatbots can adjust their responses based on the user’s mood, choices, and patterns.
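As an illustration of how such tailoring might look in code, the Python sketch below greets a user by name and filters recommendations against their purchase history. The UserProfile fields and the catalogue are hypothetical examples, not a description of any particular chatbot platform.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    name: str
    purchase_history: list[str] = field(default_factory=list)
    preferred_category: str | None = None

def personalised_greeting(user: UserProfile) -> str:
    # Greeting users by name makes the interaction feel more personal.
    return f"Hi {user.name}, welcome back!"

def recommend(user: UserProfile, catalogue: dict[str, list[str]]) -> list[str]:
    # Tailor recommendations to the user's preferred category and skip
    # anything they have already bought.
    category = user.preferred_category or "bestsellers"
    return [item for item in catalogue.get(category, [])
            if item not in user.purchase_history][:3]

catalogue = {"running": ["trail shoes", "running socks", "GPS watch"],
             "bestsellers": ["gift card", "water bottle"]}
user = UserProfile("Sam", purchase_history=["running socks"],
                   preferred_category="running")

print(personalised_greeting(user))                    # Hi Sam, welcome back!
print("You might like:", recommend(user, catalogue))  # ['trail shoes', 'GPS watch']
```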
Chatbots are important in gathering and analysing customer data. The interactions between users and chatbots generate valuable insights that can be used to optimise marketing strategies. Collecting this data can help businesses understand customer needs, behaviour patterns, and preferences.
Key areas where chatbots aid in data collection and analysis:
User interaction history: Chatbots store conversation logs, providing insights into common customer queries and issues,
Demographic data: Collecting information like age, location, and interests helps in segmenting the audience for targeted marketing,
Feedback mechanisms: Gathering feedback directly through chatbot interactions allows businesses to gauge customer satisfaction and areas for improvement.
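A minimal sketch of what that data collection can look like in practice is shown below: each exchange is appended to a log, and a simple query surfaces the most common intents. The file name and record fields are hypothetical.

```python
import json
from collections import Counter
from datetime import datetime, timezone

LOG_FILE = "chat_log.jsonl"   # hypothetical location for conversation logs

def log_interaction(user_id: str, message: str, intent: str, rating: int | None = None) -> None:
    # Store every exchange so queries, demographics and feedback can be analysed later.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "message": message,
        "intent": intent,
        "rating": rating,     # optional 1-5 feedback score collected in-chat
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def most_common_intents(n: int = 5) -> list[tuple[str, int]]:
    # Surface the questions customers ask most often, to guide marketing content.
    with open(LOG_FILE, encoding="utf-8") as f:
        intents = [json.loads(line)["intent"] for line in f]
    return Counter(intents).most_common(n)

log_interaction("user-42", "Do you ship to Ireland?", intent="shipping", rating=5)
print(most_common_intents())
```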
In the realm of marketing, using chatbots can significantly boost return on investment (ROI). Two of the primary factors contributing to this increase are cost-effectiveness and enhanced lead generation and conversions.
Chatbots offer substantial cost savings for businesses. By automating customer interactions, businesses can reduce the need for a large customer service team. This not only lowers operational costs but also streamlines processes. The initial investment in chatbot technology is often offset by the long-term savings achieved through decreased labour costs and increased efficiency.
Also, chatbots can play a critical role in lead generation and conversion rates. By engaging users in real-time, chatbots can qualify leads, provide personalised recommendations, and guide users through the sales funnel. Immediate interaction can improve user experience and lead to higher conversion rates.
Utilising chatbots for marketing can lead to higher efficiency, greater customer interaction, and ultimately a better ROI. By understanding and implementing these technologies, businesses can enhance their marketing strategies and achieve substantial financial benefits.
The post Boost your ROI: The impact of chatbots on marketing appeared first on AI News.
AI is rapidly becoming ubiquitous across business systems and IT ecosystems, with adoption and development racing faster than anyone could have expected. Today it seems that everywhere we turn, software engineers are building custom models and integrating AI into their products, as business leaders incorporate AI-powered solutions in their working environments.
However, uncertainty about the best way to implement AI is stopping some companies from taking action. Boston Consulting Group’s latest Digital Acceleration Index (DAI), a global survey of 2,700 executives, revealed that only 28% say their organisation is fully prepared for new AI regulation.
Their uncertainty is exacerbated by AI regulations arriving thick and fast: the EU AI act is on the way; Argentina released a draft AI plan; Canada has the AI and Data Act; China has enacted a slew of AI regulations; and the G7 nations launched the “Hiroshima AI process.” Guidelines abound, with the OECD developing AI principles, the UN proposing a new UN AI advisory body, and the Biden administration releasing a blueprint for an AI Bill of Rights (although that could quickly change with the second Trump administration).
Legislation is also coming in individual US states, and is appearing in many industry frameworks. To date, 21 states have enacted laws to regulate AI use in some manner, including the Colorado AI Act and clauses in California’s CCPA, and a further 14 states have legislation awaiting approval.
Meanwhile, there are loud voices on both sides of the AI regulation debate. A new survey from SolarWinds shows 88% of IT professionals advocate for stronger regulation, and separate research reveals that 91% of British people want the government to do more to hold businesses accountable for their AI systems. On the other hand, the leaders of over 50 tech companies recently wrote an open letter calling for urgent reform of the EU’s heavy AI regulations, arguing that they stifle innovation.
It’s certainly tricky for business leaders and software developers, as regulators scramble to catch up with tech. Of course you want to take advantage of the benefits AI can provide, but in a way that sets you up for compliance with whatever regulatory requirements are coming, and without handicapping your AI use unnecessarily while your rivals speed ahead.
We don’t have a crystal ball, so we can’t predict the future. But we can share some best practices for setting up systems and procedures that will prepare the ground for AI regulatory compliance.
Map out AI usage in your wider ecosystem
You can’t manage your team’s AI use unless you know about it, but that alone can be a significant challenge. Shadow IT is already the scourge of cybersecurity teams: Employees sign up for SaaS tools without the knowledge of IT departments, leaving an unknown number of solutions and platforms with access to business data and/or systems.
Now security teams also have to grapple with shadow AI. Many apps, chatbots, and other tools incorporate AI, machine learning (ML), or natural language processing (NLP) without being obviously AI solutions. When employees log into these tools without official approval, they bring AI into your systems without your knowledge.
As Opice Blum’s data privacy expert Henrique Fabretti Moraes explained, “Mapping the tools in use – or those intended for use – is crucial for understanding and fine-tuning acceptable use policies and potential mitigation measures to decrease the risks involved in their utilisation.”
Some regulations hold you responsible for AI use by vendors. To take full control of the situation, you need to map all the AI in your own and your partner organisations’ environments. In this regard, a tool like Harmonic can be instrumental in detecting AI use across the supply chain.
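As a rough illustration of what that mapping exercise can involve (not a depiction of Harmonic or any specific product), the sketch below checks a SaaS inventory export against a list of vendors known to embed AI. The file name, column names, and vendor list are hypothetical.

```python
import csv

# Hypothetical list of vendors whose products embed AI/ML features; a real
# programme would rely on a maintained vendor-intelligence feed.
KNOWN_AI_VENDORS = {"openai.com", "anthropic.com", "huggingface.co", "midjourney.com"}

def flag_shadow_ai(inventory_csv: str) -> list[dict]:
    """Read a SaaS inventory export (app_name, domain, approved) and flag
    AI-powered tools that were never formally approved."""
    flagged = []
    with open(inventory_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            is_ai = row["domain"].strip().lower() in KNOWN_AI_VENDORS
            approved = row["approved"].strip().lower() == "yes"
            if is_ai and not approved:
                flagged.append(row)
    return flagged

for app in flag_shadow_ai("saas_inventory.csv"):
    print(f"Unapproved AI tool found: {app['app_name']} ({app['domain']})")
```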
Verify data governance
Data privacy and security are core concerns for all AI regulations, both those already in place and those on the brink of approval.
Your AI use already needs to comply with existing privacy laws like GDPR and the CCPA, which require you to know what data your AI can access, what it does with that data, and that you can demonstrate guardrails to protect it.
To ensure compliance, you need to put robust data governance rules into place in your organisation, managed by a defined team, and backed up by regular audits. Your policies should include due diligence to evaluate data security and sources of all your tools, including those that use AI, to identify areas of potential bias and privacy risk.
“It is incumbent on organisations to take proactive measures by enhancing data hygiene, enforcing robust AI ethics and assembling the right teams to lead these efforts,” said Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds. “This proactive stance not only helps with compliance with evolving regulations but also maximises the potential of AI.”
Establish continuous monitoring for your AI systems
Effective monitoring is crucial for managing any area of your business. When it comes to AI, as with other areas of cybersecurity, you need continuous monitoring to ensure that you know what your AI tools are doing, how they are behaving, and what data they are accessing. You also need to audit them regularly to keep on top of AI use in your organisation.
“The idea of using AI to monitor and regulate other AI systems is a crucial development in ensuring these systems are both effective and ethical,” said Cache Merrill, founder of software development company Zibtek. “Currently, techniques like machine learning models that predict other models’ behaviours (meta-models) are employed to monitor AI. The systems analyse patterns and outputs of operational AI to detect anomalies, biases or potential failures before they become critical.”
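A heavily simplified version of that idea is sketched below: a small “watcher” flags model outputs whose scores drift sharply from a recent baseline. It is a toy statistical check standing in for the meta-models described above, with made-up numbers.

```python
import statistics

def detect_anomalies(scores: list[float], window: int = 50, z_threshold: float = 3.0):
    """Flag outputs whose score (e.g. confidence) deviates sharply from the recent
    baseline - a toy stand-in for a meta-model watching an operational model."""
    alerts = []
    for i in range(window, len(scores)):
        baseline = scores[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9   # guard against a flat baseline
        z = abs(scores[i] - mean) / stdev
        if z > z_threshold:
            alerts.append((i, scores[i], round(z, 1)))
    return alerts

# Made-up example: steady confidence scores with a sudden drop at position 60.
stream = [0.88, 0.90, 0.92] * 20 + [0.2] + [0.90] * 20
for index, value, z in detect_anomalies(stream):
    print(f"Alert: output {index} scored {value} (z={z}) - review before it reaches users")
```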
Cyber GRC automation platform Cypago allows you to run continuous monitoring and regulatory audit evidence collection in the background. The no-code automation allows you to set custom workflow capabilities without technical expertise, so alerts and mitigation actions are triggered instantly according to the controls and thresholds you set up.
Cypago can connect with your various digital platforms, synchronise with virtually any regulatory framework, and turn all relevant controls into automated workflows. Once your integrations and regulatory frameworks are set up, creating custom workflows on the platform is as simple as uploading a spreadsheet.
Use risk assessments as your guidelines
It’s vital to know which of your AI tools are high risk, medium risk, and low risk – for compliance with external regulations, for internal business risk management, and for improving software development workflows. High risk use cases will need more safeguards and evaluation before deployment.
“While AI risk management can be started at any point in the project development,” said Ayesha Gulley, an AI policy expert from Holistic AI, “implementing a risk management framework sooner than later can help enterprises increase trust and scale with confidence.”
When you know the risks posed by different AI solutions, you can choose the level of access you’ll grant them to data and critical business systems.
In terms of regulations, the EU AI Act already distinguishes between AI systems with different risk levels, and NIST recommends assessing AI tools based on trustworthiness, social impact, and how humans interact with the system.
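As a simple illustration of how such a triage could be encoded internally, the sketch below sorts tools into three tiers. The criteria are loosely inspired by the risk-based approach of the EU AI Act but are illustrative only; a real assessment would follow the Act’s own categories and your organisation’s risk framework.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

def assess_ai_tool(handles_personal_data: bool,
                   affects_individual_rights: bool,
                   makes_autonomous_decisions: bool) -> RiskTier:
    # Illustrative criteria only - not the EU AI Act's legal definitions.
    if affects_individual_rights and makes_autonomous_decisions:
        return RiskTier.HIGH      # e.g. automated hiring or credit decisions
    if handles_personal_data or makes_autonomous_decisions:
        return RiskTier.MEDIUM    # needs safeguards and human review
    return RiskTier.LOW           # e.g. an internal drafting assistant

print(assess_ai_tool(handles_personal_data=True,
                     affects_individual_rights=True,
                     makes_autonomous_decisions=True))   # RiskTier.HIGH
```

The resulting tier can then drive how much access each tool is granted to data and critical systems, in line with the point above.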
Proactively set AI ethics governance
You don’t need to wait for AI regulations to set up ethical AI policies. Allocate responsibility for ethical AI considerations, put together teams, and draw up policies for ethical AI use that include cybersecurity, model validation, transparency, data privacy, and incident reporting.
Plenty of existing frameworks like NIST’s AI RMF and ISO/IEC 42001 recommend AI best practices that you can incorporate into your policies.
“Regulating AI is both necessary and inevitable to ensure ethical and responsible use. While this may introduce complexities, it need not hinder innovation,” said Arik Solomon, CEO and co-founder of Cypago. “By integrating compliance into their internal frameworks and developing policies and processes aligned with regulatory principles, companies in regulated industries can continue to grow and innovate effectively.”
Companies that can demonstrate a proactive approach to ethical AI will be better positioned for compliance. AI regulations aim to ensure transparency and data privacy, so if your goals align with these principles, you’ll be more likely to have policies in place that comply with future regulation. The FairNow platform can help with this process, with tools for managing AI governance, bias checks, and risk assessments in a single location.
Don’t let fear of AI regulation hold you back
AI regulations are still evolving and emerging, creating uncertainty for businesses and developers. But don’t let the fluid situation stop you from benefiting from AI. By proactively implementing policies, workflows, and tools that align with the principles of data privacy, transparency, and ethical use, you can prepare for AI regulations and take advantage of AI-powered possibilities.
The post Preparing today for tomorrow’s AI regulations appeared first on AI News.
Business Insider’s “CXO AI Playbook” looks at how firms are utilising AI to tackle challenges, scale operations, and plan for the future.
It gathers stories from various industries to see what problems AI is solving, who’s driving these initiatives, and how it’s reshaping strategies.
Salesforce, well known for its CRM software used by over 150,000 companies including Amazon and Walmart, is no stranger to innovation. It also owns Slack, the popular workplace communication app. Salesforce is now stepping up its AI game with Agentforce, a platform that lets businesses build and deploy digital agents to automate tasks such as creating sales reports and summarising Slack conversations.
What problem is it solving?
Salesforce has been working with AI for years. In 2016, it launched Einstein, an AI feature baked into its CRM platform. Einstein handled basic scriptable tasks, but the rise of generative AI brought a chance to do more. Smarter tools could now make better decisions and understand natural language.
This sparked a transformation. First came Einstein GPT, then Einstein Copilot, and now Agentforce—a platform designed for flexibility with prebuilt and customisable agents to handle diverse business needs.
“Our customers wanted more. Some wanted to tweak the agents we offer, while others wanted to create their own,” said Tyler Carlson, Salesforce’s VP of Business Development.
The tech behind it
Agentforce is powered by Salesforce’s Atlas Reasoning Engine, developed in-house. The platform connects with AI models from major players like OpenAI, Anthropic, Amazon, and Google, giving businesses access to a variety of tools.
Slack has become a testing ground for these AI agents. Currently in beta, Agentforce’s Slack integration puts automations where employees already spend their time. “Slack makes these tools easy to use and accessible,” Carlson added.
Smarter, more flexible AI
Agentforce uses ReAct prompting, a technique that helps agents break down problems into smaller steps and adjust their approach as they go. This leads to more accurate responses and hands-off task management, from answering questions to scheduling meetings.
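The loop below is a deliberately bare-bones sketch of the thought-action-observation cycle that ReAct-style prompting describes. The tools and the llm_step helper are hypothetical stand-ins for the language model and Agentforce’s real integrations.

```python
# Hypothetical tools the agent is allowed to call.
TOOLS = {
    "calendar_lookup": lambda arg: f"Free slots for {arg}: Tue 10:00, Wed 14:00",
    "send_invite":     lambda arg: f"Invite sent for {arg}",
}

def llm_step(goal: str, scratchpad: list[str]) -> dict:
    # Stand-in for the language model choosing the next thought and action.
    if not scratchpad:
        return {"thought": "I need to check availability first.",
                "action": "calendar_lookup", "input": "next week"}
    if len(scratchpad) == 1:
        return {"thought": "A slot exists, so I can book the meeting.",
                "action": "send_invite", "input": "Tue 10:00"}
    return {"thought": "The task is complete.", "action": "finish", "input": ""}

def react_agent(goal: str) -> list[str]:
    scratchpad: list[str] = []
    while True:
        step = llm_step(goal, scratchpad)
        if step["action"] == "finish":
            return scratchpad
        observation = TOOLS[step["action"]](step["input"])   # act, then observe
        scratchpad.append(f"Thought: {step['thought']} Observation: {observation}")

for line in react_agent("Schedule a meeting with the design team"):
    print(line)
```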
Agentforce works with Salesforce’s proprietary LLMs and third-party models, giving clients plenty of options. To ensure security, Salesforce enforces strict data privacy policies, including limits on data retention.
Making it work for businesses
With tools like Agentbuilder, companies can design AI agents tailored to their needs. For example, an agent could sort emails or answer specific HR questions using internal data. One example is Salesforce’s collaboration with Workday to create an AI service agent for employee queries.
Salesforce is already seeing results, with Agentforce resolving 90% of customer inquiries in early trials. The goal? Broader adoption, more capabilities, and higher workloads handled by these agents.
“We’re building a bigger ecosystem of partners and skills,” Carlson said. “By next year, we want Agentforce to be a must-have for businesses.”
See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era
The post Salesforce launches AI platform for automated task management appeared first on AI News.
A report by Publicis Sapient sheds light on the disparities between the C-suite and practitioners, dubbed the “V-suite,” in their perceptions and adoption of generative AI.
The report reveals a stark contrast in how the C-suite and V-suite view the potential of generative AI. While the C-suite focuses on visible use cases such as customer experience, service, and sales, the V-suite sees opportunities across various functional areas, including operations, HR, and finance.
Risk perception
The divide extends to risk perception as well. Fifty-one percent of C-level respondents expressed more concern about the risk and ethics of generative AI than other emerging technologies. In contrast, only 23 percent of the V-suite shared these worries.
Simon James, Managing Director of Data & AI at Publicis Sapient, said: “It’s likely the C-suite is more worried about abstract, big-picture dangers – such as Hollywood-style scenarios of a rapidly-evolving superintelligence – than the V-suite.”
The report also highlights the uncertainty surrounding generative AI maturity. Organisations can be at various stages of maturity simultaneously, with many struggling to define what success looks like. More than two-thirds of respondents lack a way to measure the success of their generative AI projects.
Navigating the generative AI landscape
Despite the C-suite’s focus on high-visibility use cases, generative AI is quietly transforming back-office functions. More than half of the V-suite respondents ranked generative AI as extremely important in areas like finance and operations over the next three years, compared to a smaller percentage of the C-suite.
To harness the full potential of generative AI, the report recommends a portfolio approach to innovation projects. Leaders should focus on delivering projects, controlling shadow IT, avoiding duplication, empowering domain experts, connecting business units with the CIO’s office, and engaging the risk office early and often.
Daniel Liebermann, Managing Director at Publicis Sapient, commented: “It’s as hard for leaders to learn how individuals within their organisation are using ChatGPT or Microsoft Copilot as it is to understand how they’re using the internet.”
The path forward
The report concludes with five steps to maximise innovation: adopting a portfolio approach, improving communication between the CIO’s office and the risk office, seeking out innovators within the organisation, using generative AI to manage information, and empowering team members through company culture and upskilling.
As generative AI continues to evolve, organisations must bridge the gap between the C-suite and V-suite to unlock its full potential. The future of business transformation lies in harnessing the power of a decentralised, bottom-up approach to innovation.
See also: EU introduces draft regulatory guidance for AI models
The post Generative AI: Disparities between C-suite and practitioners appeared first on AI News.
The release of the “First Draft General-Purpose AI Code of Practice” marks the EU’s effort to create comprehensive regulatory guidance for general-purpose AI models.
The development of this draft has been a collaborative effort, involving input from diverse sectors including industry, academia, and civil society. The initiative was led by four specialised Working Groups, each addressing specific aspects of AI governance and risk mitigation:
Working Group 1: Transparency and copyright-related rules
Working Group 2: Risk identification and assessment for systemic risk
Working Group 3: Technical risk mitigation for systemic risk
Working Group 4: Governance risk mitigation for systemic risk
The draft is aligned with existing laws such as the Charter of Fundamental Rights of the European Union. It takes into account international approaches, striving for proportionality to risks, and aims to be future-proof by contemplating rapid technological changes.
Key objectives outlined in the draft include:
Clarifying compliance methods for providers of general-purpose AI models
Facilitating understanding across the AI value chain, ensuring seamless integration of AI models into downstream products
Ensuring compliance with Union law on copyrights, especially concerning the use of copyrighted material for model training
Continuously assessing and mitigating systemic risks associated with AI models
Recognising and mitigating systemic risks
A core feature of the draft is its taxonomy of systemic risks, which includes types, natures, and sources of such risks. The document outlines various threats such as cyber offences, biological risks, loss of control over autonomous AI models, and large-scale disinformation. By acknowledging the continuously evolving nature of AI technology, the draft recognises that this taxonomy will need updates to remain relevant.
As AI models with systemic risks become more common, the draft emphasises the need for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure appropriate risk identification, analysis, and mitigation throughout a model’s lifecycle.
The draft suggests that providers establish processes to identify and report serious incidents associated with their AI models, offering detailed assessments and corrections as needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.
Taking a proactive stance to AI regulatory guidance
The EU AI Act, which came into force on 1 August 2024, mandates that the final version of this Code be ready by 1 May 2025. This initiative underscores the EU’s proactive stance towards AI regulation, emphasising the need for AI safety, transparency, and accountability.
As the draft continues to evolve, the working groups invite stakeholders to participate actively in refining the document. Their collaborative input will shape a regulatory framework aimed at safeguarding innovation while protecting society from the potential pitfalls of AI technology.
While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a benchmark for responsible AI development and deployment globally. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection.
This draft is open for written feedback until 28 November 2024.
See also: Anthropic urges AI regulation to avoid catastrophes
The post EU introduces draft regulatory guidance for AI models appeared first on AI News.
Microsoft Edge has evolved into more than simply a browser; it is a critical component of Microsoft’s ecosystem, meant to integrate smoothly with Windows and highlight the company’s latest innovations, such as its AI assistant, Copilot.
While these interconnections make Edge a viable choice, Microsoft’s methods for persuading consumers to choose it have been far from covert.
From default settings that prioritise Edge to persistent prompts at startup, Microsoft has made it clear they want Edge to be the go-to for Windows users. And lately, it’s upped the ante: now, Edge can launch automatically when your computer boots up, instantly nudging you to bring over your data from other browsers.
The most recent update includes an auto-checked option to import browsing data from Chrome – such as history, bookmarks, and open tabs – ostensibly so users can take advantage of the features of its AI assistant, Copilot. Although the AI features may appeal to some, the aggressive approach has left many users feeling annoyed rather than tempted.
The Verge recently noticed that when you start up your PC, Edge might decide to open on its own, promptly displaying a pop-up for its AI assistant, Copilot. Right next to Copilot, there’s a conveniently checked box allowing Edge to import data from other browsers automatically. For some users, this seems like an overreach, raising doubts about how far Microsoft is ready to go to make Edge the browser of choice.
Microsoft has confirmed this setup and stated that customers have the option to opt out. Still, with default settings that favour data imports and an eye-catching import button, it’s easy for users to unintentionally make the switch, especially if they’re not paying attention. For those who prefer sticking with their existing browsers without interruption, the approach can feel unwelcome.
But even if users dodge the pop-ups, Edge isn’t exactly shy. Uninstalling it is a complex process, and it often gets reinstalled by Windows updates, much to the frustration of users who would rather go without. For many, this persistence feels more like a forceful sales pitch rather than a friendly suggestion.
Interestingly, this isn’t the first time Microsoft has tried this type of strategy. A similar message appeared to users earlier this year but was pulled back after strong objections. Now, it’s back, with Microsoft’s Caitlin Roulston stating the notification is meant to “give users the choice to import data from other browsers.”
In fact, Microsoft’s bold tactics go back some years. In 2022, it introduced a feature that could automatically pull data from Chrome into Edge – although users had the option to decline. In 2021, the company made it practically impossible to set any browser other than Edge as the default, resulting in enough outcry for Microsoft to back down.
While Microsoft promotes its pop-ups as a way to give users more control, many users simply want choice without constant nudges. The relentless push for Edge could prove counterproductive, as the company’s persistence may drive users toward other browsers rather than away from them. To truly compete, Microsoft might benefit from letting Edge’s strengths speak for themselves rather than relying on aggressive prompts to change hearts and minds.
See also: EU probes Microsoft-OpenAI and Google-Samsung AI deals
The post Microsoft tries to convert Google Chrome users appeared first on AI News.
Did you know that effective asset management practices pose challenges for almost half of small businesses? According to the latest research, 43% of businesses either report their inventory manually or, in a few cases, do not record assets in any manner.
However, asset management is not immune to the disruptive pressure of artificial intelligence (AI), which is currently revolutionising numerous industries. The manner in which corporations manage their tangible and intangible assets is undergoing a profound transformation due to the evolving technology of AI. This blog explores how AI-driven fixed asset software is transforming asset management and what the future holds for businesses adopting these innovations.
Introduction to fixed asset management and AI
Fixed asset management is a critical function that allows organisations to manage, control, and optimise the value of their physical assets. Assets can include everything from equipment and vehicles to computer systems. Traditionally, asset management has relied on manual record-keeping and periodic audits, which can be time-consuming and susceptible to human error.
AI-driven fixed asset software offers a modern solution by automating diverse aspects of asset control. This guarantees accuracy, reduces administrative overhead, and extends an asset’s useful life, ultimately contributing to significant cost savings. AI, blended with the Internet of Things (IoT), machine learning (ML), and predictive analytics, underpins the development of smart, efficient, and scalable asset management solutions.
The predictive capacities of AI revolutionise proactive asset management. AI can predict when a piece of hardware is likely to fail or spot chances for optimisation by evaluating patterns and trends in data. The proactive strategy not only helps with strategic planning but also ensures the reliability of operations by preventing system outages that can cause serious disruptions to business operations and financial losses. Businesses may use AI to ensure their assets operate at peak efficiency, quickly adopt new technologies, and match operations to corporate goals.
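To make the predictive side tangible, here is a small Python sketch that trains a classifier on synthetic sensor readings and flags assets at risk of failure. The feature names, thresholds, and data are all made up for illustration; a production system would be trained on real maintenance history.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic sensor readings: [temperature, vibration, hours_since_service],
# labelled 1 where the asset went on to fail (simplified, made-up rule).
rng = np.random.default_rng(0)
X = rng.normal(loc=[70, 0.30, 400], scale=[10, 0.10, 150], size=(500, 3))
y = ((X[:, 0] > 80) & (X[:, 1] > 0.35)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score today's readings and flag assets that should be serviced proactively.
todays_readings = np.array([[85.0, 0.42, 510.0],    # hot and vibrating heavily
                            [68.0, 0.28, 120.0]])   # healthy
for asset_id, prob in enumerate(model.predict_proba(todays_readings)[:, 1]):
    if prob > 0.5:
        print(f"Asset {asset_id}: {prob:.0%} failure risk - schedule maintenance")
```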
AI’s advantages for fixed asset software
AI-driven fixed asset software has numerous advantages for businesses, particularly in sectors where asset management is vital to daily operations, like production, healthcare, and logistics.
Greater effectiveness: Automation significantly speeds up asset tracking, control, and upkeep. Because AI can assess huge amounts of information in real time, managers have an immediate, up-to-date view of the state of their assets.
Cost savings: Ongoing asset utilisation tracking and predictive analysis can lower operating costs. AI can identify underutilised or poorly functioning items, helping businesses save money through reallocation or better disposal scheduling.
Enhanced compliance and reporting: Staying compliant can be challenging with increasingly stringent regulatory governance. AI ensures that compliance reports are generated accurately and on time. Moreover, the software can routinely modify asset data to mirror regulatory changes, ensuring that companies consistently comply with laws.
Improved decision-making: With AI’s analytics capabilities, managers can make better choices about which assets to invest in, when to repair, and when to retire an asset. Selections are based on real-time information and predictive models instead of guesswork or manual calculations.
Case study: Predictive portfolio management precision
Issue:
Predicting market trends and real-time portfolio optimisation was complicated for a top asset management company. Conventional approaches could not keep up with market demands, resulting in lost opportunities and less-than-ideal results.
Solution:
The company was able to quickly evaluate large datasets by implementing an AI-powered predictive analytics system. The AI algorithms examined market patterns, assessed risk factors, and dynamically altered the portfolio. The end result was a notable improvement in portfolio performance and increased forecasting accuracy.
Findings:
A 20% boost in portfolio returns was attained.
Real-time market trend information improved decision-making.
The future of AI in asset management
AI will revolutionise customer satisfaction, operational effectiveness, and decision-making in asset management. Below are the important elements that will transform asset management operations:
1) Elevated decision making
By revealing hidden patterns in huge datasets, AI will permit asset managers to make better decisions. AI can evaluate the whole portfolio, compiling financial statistics and market news, which together will improve risk posture and portfolio construction. AI will also make real-time adaptation feasible, helping managers anticipate marketplace swings and stay ahead of them.
2) Automation and operational efficiency
Robo-advisors will become necessary tools, autonomously managing tasks like portfolio rebalancing and standard operations. AI’s algorithms will execute decisions quickly, decreasing human intervention and cutting costs. AI will automate tedious back-office operations, including data entry and regulatory compliance procedures, ensuring smooth, streamlined workflows.
3) Client experience transformation
In the future, client interactions will become more personalised and responsive. AI will analyse customer data to provide tailored investment recommendations, and AI-powered chatbots will be available 24/7 to answer queries. The technology can even simplify reporting, turning complex financial information into easily digestible, jargon-free insights, building trust and transparency in customer relationships.
Conclusion:
The future of asset management is undeniably tied to improvements in AI technology. AI-driven fixed asset software is already reshaping asset monitoring, predictive analytics, and risk management through optimisation and automation. As hyperautomation and IoT continue to mature, the possibilities for remodelling asset management are limitless.
The post Using AI technologies for future asset management appeared first on AI News.
Japan is on a mission to become a global AI powerhouse, and it’s starting with some impressive advances in AI-driven language models. Japanese technology experts are developing advanced models that grasp the unique nuances of the Japanese language and culture – essential for industries such as healthcare, finance, and manufacturing, where precision is key.
But this effort isn’t Japan’s alone. Consulting giants like Accenture, Deloitte, EY Japan, FPT, Kyndryl, and TCS Japan are partnering with NVIDIA to create AI innovation hubs across the country. The centres are using NVIDIA’s AI software and specialised Japanese language models to build tailored AI solutions, helping industries boost productivity in a digital workforce. The goal? To get Japanese companies fully on board with enterprise and physical AI.
One standout technology supporting the drive is NVIDIA’s Omniverse platform. With Omniverse, ********* companies can create digital twins—virtual replicas of real-world assets—and test complex AI systems safely before implementing them. This is a game-changer for industries such as manufacturing and robotics, allowing businesses to fine-tune processes without the risk of real-world trial and error. This use of AI is more than just innovation; it represents Japan’s plan for addressing some major challenges ahead.
Japan faces a shrinking workforce as its population ages. With its strengths in robotics and automation, Japan is well-positioned to use AI solutions to bridge the gap. In fact, Japan’s government recently shared its vision of becoming “the world’s most AI-friendly country,” underscoring the role AI is expected to play in the nation’s future.
Supporting this commitment, Japan’s AI market hit $5.9 billion in value this year, a 31.2% growth rate according to IDC. New AI-focused consulting centres in Tokyo and Kansai give Japanese businesses hands-on access to NVIDIA’s latest technologies, equipping them to solve social challenges and aid economic growth.
Top cloud providers like SoftBank, GMO Internet Group, KDDI, Highreso, Rutilea, and SAKURA Internet are also involved, working with NVIDIA to build AI infrastructure. Backed by Japan’s Ministry of Economy, Trade and Industry, they’re establishing AI data centres across Japan to accelerate growth in robotics, automotive, healthcare, and telecoms.
NVIDIA and SoftBank have also formed a remarkable partnership to build Japan’s most powerful AI supercomputer using NVIDIA’s Blackwell platform. Additionally, SoftBank has tested the world’s first AI and 5G hybrid telecoms network with NVIDIA’s AI Aerial platform, allowing Japan to set a worldwide standard. With these developments, Japan is taking big strides toward establishing itself as a leader in the AI-powered industrial revolution.
See also: NVIDIA’s share price nosedives as antitrust clouds gather
The post NVIDIA AI Summit Japan: NVIDIA’s role in Japan’s big AI ambitions appeared first on AI News.
AI continues to transform industries, and having the right skills can make a significant difference to your career. Professionals wishing to get into this evolving field can take advantage of a variety of specialised courses that teach how to use AI in business, creativity, and data analysis.
Artificial Intelligence: Preparing Your Career for AI
Artificial Intelligence: Preparing Your Career for AI is an option for those wanting to future-proof their careers in an AI-centric workplace. The course outlines five essential steps for preparing for AI’s impact on job roles and skill requirements. Participants learn the basics of AI, strategies for aligning their career paths with AI advancements, and how to use AI responsibly.
The course is ideal for individuals at any career stage who wish to understand AI’s impact on the job market and adapt proactively.
Generative AI for Leaders
For business leaders, Generative AI for Leaders focuses on integrating AI into an organisation’s strategy. AI’s rapid advancement offers both opportunities and challenges for business leaders, who must balance innovation with ethical and operational concerns.
In this course, participants learn strategies for building AI-driven business initiatives, fostering collaboration, and addressing compliance and ethical considerations.
With a practical look at AI trends, this course prepares leaders to develop a culture that supports AI adoption and equips them with the tools needed to make informed decisions.
Business Analyst: Digital Director for AI and Data Science
Business Analyst: Digital Director for AI and Data Science is a course designed for business analysts and other professionals, explaining how to define requirements for data science and artificial intelligence projects. The course covers the requirements elicitation process for AI applications and teaches participants how to work closely with data scientists and machine learning engineers to ensure that AI projects meet business goals.
Learners gain insights into conversational AI tools, the differences between Natural Language Understanding (NLU) bots and rule-based bots, and best practices in conversation flow analysis. For business analysts, the course provides essential skills to guide AI initiatives that deliver real business value.
Prompt Engineering+: Master Speaking to AI
One valuable course is Prompt Engineering+: Master Speaking to AI, which teaches the art of creating precise instructions for generative AI models. ‘Prompt engineering’ is essential for situations in which human intent must be accurately translated into AI output.
The course covers prompt structure, including one-shot, few-shot, and zero-shot learning, as well as fundamental skills like natural language processing and Python programming. Students work with leading models including ChatGPT, Google Gemini, and DALL-E, and learn practical methods to refine and test prompts, control model output, and tackle inaccuracies.
For those looking to work directly with generative AI, this course provides a foundational skill set to optimise AI interactions.
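For readers unfamiliar with the jargon, the snippet below shows the practical difference between a zero-shot and a few-shot prompt for a simple sentiment task. The reviews and wording are invented for illustration, and the actual API call to a model such as ChatGPT or Gemini is deliberately left out.

```python
def zero_shot_prompt(review: str) -> str:
    # Zero-shot: the instruction alone, with no worked examples.
    return (f"Classify the sentiment of this review as positive or negative.\n"
            f"Review: {review}\nSentiment:")

def few_shot_prompt(review: str) -> str:
    # Few-shot: a handful of worked examples steer the model's format and behaviour.
    examples = ("Review: The battery lasts all day.\nSentiment: positive\n\n"
                "Review: It broke after a week.\nSentiment: negative\n\n")
    return ("Classify the sentiment of each review as positive or negative.\n\n"
            + examples
            + f"Review: {review}\nSentiment:")

print(zero_shot_prompt("Setup was painless and the screen is gorgeous."))
print("---")
print(few_shot_prompt("Setup was painless and the screen is gorgeous."))
```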
Canva AI: Master Canva AI Tools and Apps 2024
Content creators can benefit from Canva AI: Master Canva AI Tools and Apps 2024, a course focused on using Canva’s AI-driven tools to streamline and enhance content production. This course introduces participants to Canva’s Magic Studio, where they explore tools for creating engaging social media posts, PDFs, videos, and presentations. From text-to-image conversions to speaking avatars, the course delves into AI tools that help creators produce content efficiently.
Through hands-on projects, learners experience Canva AI’s capabilities, enabling them to produce a wide variety of content quickly and effectively—a valuable skill for social media, marketing, and creative professionals.
Conclusion
These courses offer a comprehensive toolkit for mastering AI skills in various fields. Embracing these opportunities can empower professionals to lead, create, and adapt in AI-driven organisations. Whether you’re a business leader, a content creator, or a data professional, investing in AI skills prepares you to navigate the future with confidence and purpose.
See also: Understanding AI’s impact on the workforce
The post AI courses to boost your skills and stay ahead appeared first on AI News.
OpenAI is facing diminishing returns with its latest AI model while navigating the pressures of recent investments.
According to The Information, OpenAI’s next AI model – codenamed Orion – is delivering smaller performance gains compared to its predecessors.
In employee testing, Orion reportedly achieved the performance level of GPT-4 after completing just 20% of its training. However, the transition from GPT-4 to the anticipated GPT-5 is said to exhibit smaller quality improvements than the leap from GPT-3 to GPT-4.
“Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks,” according to employees cited in the report. “Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee.”
Early stages of AI training usually yield the most significant improvements, while subsequent phases typically result in smaller performance gains. Consequently, the remaining 80% of training is unlikely to deliver advancements on par with previous generational improvements.
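One way to picture why late-stage training yields less is the power-law form often used to describe neural scaling; the expression below is a stylised illustration with made-up constants, not a figure from the report.

```latex
% Stylised scaling relationship: loss L falls with training compute C,
% but with diminishing returns (L_0, a and \alpha are illustrative constants).
\[
  L(C) = L_0 + a\,C^{-\alpha}, \qquad
  \frac{\mathrm{d}L}{\mathrm{d}C} = -\alpha\, a\, C^{-(\alpha+1)}
\]
% The derivative shrinks rapidly as C grows, so the early portion of a training
% run captures most of the achievable improvement while the remaining compute
% buys progressively smaller gains.
```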
This situation with its latest AI model emerges at a pivotal time for OpenAI, following a recent funding round that saw the company raise $6.6 billion. With this financial backing comes increased expectations from investors, as well as technical challenges that complicate traditional scaling methodologies in AI development.
If these early versions do not meet expectations, OpenAI’s future fundraising prospects may not attract the same level of interest.
The limitations highlighted in the report underline a significant challenge confronting the entire AI industry: the diminishing availability of high-quality training data and the necessity to maintain relevance in an increasingly competitive field.
According to a paper (PDF) published in June, AI firms will deplete the pool of publicly available human-generated text data between 2026 and 2032. The Information notes that developers have “largely squeezed as much out of” the data that has been used to enable the rapid AI advancements we’ve seen in recent years.
To address these challenges, OpenAI is fundamentally rethinking its AI development strategy.
“In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law,” explains The Information.
As OpenAI navigates these challenges, the company must balance innovation with practical application and investor expectations. However, the ongoing exodus of leading figures from the company won’t help matters.
See also: ASI Alliance launches AIRIS that ‘learns’ in Minecraft
The post OpenAI faces diminishing returns with latest AI model appeared first on AI News.
The Tony Blair Institute (TBI) has examined AI’s impact on the workforce. The report outlines AI’s potential to reshape work environments, boost productivity, and create opportunities—while warning of potential challenges ahead.
“Technology has a long history of profoundly reshaping the world of work,” the report begins.
From the agricultural revolution to the digital age, each wave of innovation has redefined labour markets. Today, AI presents a seismic shift, advancing rapidly and prompting policymakers to prepare for change.
Economic opportunities
The TBI report estimates that AI, when fully adopted by UK firms, could significantly increase productivity. It suggests that AI could save “almost a quarter of private-sector workforce time,” equivalent to the annual output of 6 million workers.
Most of these time savings are expected to stem from AI-enabled software performing cognitive tasks such as data analysis and routine administrative operations.
The report identifies sectors reliant on routine cognitive tasks, such as banking and finance, as those with significant exposure to AI. However, sectors like skilled trades or construction – which involve complex manual tasks – are likely to see less direct impact.
While AI can result in initial job losses, it also has the potential to create new demand by fostering economic growth and new industries.
The report expects that these job losses can be balanced by new job creation. Technology has historically spurred new employment opportunities, as innovation leads to the development of new products and services.
Shaping future generations
AI’s potential extends into education, where it could assist both teachers and students.
The report suggests that AI could help “raise educational attainment by around six percent” on average. By personalising and supporting learning, AI has the potential to equalise access to opportunities and improve the quality of the workforce over time.
Health and wellbeing
Beyond education, AI offers potential benefits in healthcare, supporting a healthier workforce and reducing welfare costs.
The report highlights AI’s role in speeding medical research, enabling preventive healthcare, and helping those with disabilities re-enter the workforce.
Workplace transformation
The report acknowledges potential workplace challenges, such as increased monitoring and stress from AI tools. It stresses the importance of managing these technologies thoughtfully to “deliver a more engaging, inclusive and safe working environment.”
To mitigate potential disruption, the TBI outlines recommendations. These include upgrading labour-market infrastructure and utilising AI for job matching.
The report suggests creating an “Early Awareness and Opportunity System” to help workers understand the impact of AI on their jobs and provide advice on career paths.
Preparing for an AI-powered future
In light of the uncertainties surrounding AI’s impact on the workforce, the TBI urges policy changes to maximise benefits. Recommendations include incentivising AI adoption across industries, developing AI-pathfinder programmes, and creating challenge prizes to address public-sector labour shortages.
The report concludes that while AI presents risks, the potential gains are too significant to ignore.
Policymakers are encouraged to adopt a “pro-innovation” stance while being attuned to the risks, fostering an economy that is dynamic and resilient.
See also: Anthropic urges AI regulation to avoid catastrophes
The post Understanding AI’s impact on the workforce appeared first on AI News.
The intelligence displayed by generative AI chatbots like OpenAI’s ChatGPT has captured the imagination of individuals and corporations, and artificial intelligence has suddenly become the most exciting area of technology innovation.
AI has been recognised as a game changer, with potential to transform many aspects of our lives. From personalised medicine to autonomous vehicles, automated investments to digital assets, the possibilities enabled by AI seem endless.
But as transformational as AI will be, there are a lot of risks posed by this new technology. While fears about a malicious, Skynet-style AI system going rogue are misplaced, the dangers of AI centralisation are not. As companies like Microsoft, Google and Nvidia forge ahead in their pursuit of AI, fears about the concentration of power in the hands of just a few centralised players are becoming more pronounced.
Why should we worry about decentralised AI?
Monopoly power
The most pressing issue arising from centralised AI is the prospect of a few tech giants achieving monopolistic control over the industry. The big tech giants have already accumulated a very significant market share in AI, giving them possession of vast amounts of data. They also control the infrastructure that AI systems run on, enabling them to stifle their competitors, hobble innovation, and perpetuate economic inequality.
By achieving a monopoly over the development of AI, these companies are more likely to have an unfair influence on regulatory frameworks, which they can manipulate to their advantage. It will mean that smaller startups, which lack the enormous resources of big tech giants, will struggle to keep up with the pace of innovation. Those that do survive and look like they might thrive will almost certainly end up being acquired, further concentrating power in the hands of the few. The result will be less diversity in terms of AI development, fewer choices for consumers, and less favourable terms, limiting the use-cases and economic opportunities promised by AI.
Bias and Discrimination
Aside from monopolistic control, there are genuine fears around the bias of AI systems, and these concerns will take on more importance as society increasingly relies on AI.
The risk stems from the fact that organisations are becoming more reliant on automated systems to make decisions in many areas. It’s not unusual for a company to employ AI algorithms to filter job applicants, for example, and the risk is that a biased system could unfairly exclude a subset of candidates based on their ethnicity, age or location. AI is also used by insurance companies to set policy rates, by financial services firms to determine if someone qualifies for a loan and the amount of interest they’ll need to pay, and by law enforcement to determine which areas are more likely to see higher crime. In all of these use-cases, the potential implications of biased AI systems are extremely worrying.
Whether it’s law enforcement targeting minority communities, discriminatory lending practices or something else, centralised AI can potentially exacerbate social inequality and enable systemic discrimination.
Privacy and surveillance
Another risk posed by centralised AI systems is the lack of privacy protections. When just a few big companies control the vast majority of data generated by AI, they gain the ability to carry out unprecedented surveillance on their users. The data accumulated by the most dominant AI platforms can be used to monitor, analyse and predict an individual’s behaviour with incredible accuracy, eroding privacy and increasing the potential for the information to be misused.
It’s of particular concern in countries with authoritarian governments, where data can be weaponised to create more sophisticated tools for monitoring citizens. But even in democratic societies, there is a threat posed by increased surveillance, as exemplified by the revelations of Edward Snowden about the US National Security Agency’s Prism program.
Corporations can also potentially misuse consumers’ data to increase their profits. In addition, when centralised entities accumulate vast amounts of sensitive data, they become more lucrative targets for hackers, increasing the risk of data leaks.
Security risks
Issues of national security can also arise due to centralised AI. For instance, there are justified fears that AI systems can be weaponised by nations, used to conduct cyberwarfare, engage in espionage, and develop new weapons systems. AI could become a key tool in future wars, raising the stakes in geopolitical conflicts.
AI systems themselves can also be targeted. As nations increase their reliance on AI, such systems will make for enticing targets, as they are obvious single points of failure. Take out an AI system and you could disrupt the entire traffic flow of cities, take down electrical grids, and more.
Ethics
The other major concern around centralised AI is ethical. The handful of companies that control AI systems would gain substantial influence over a society’s cultural norms and values, and may well prioritise profit over the public interest, creating further ethical concerns.
For example, AI algorithms are already being used widely by social media platforms to moderate content, in an attempt to identify and filter out offensive posts. The worry is that algorithms, either by accident or design, might end up suppressing free speech.
There is already controversy about the effectiveness of AI-powered moderation systems, with numerous seemingly innocuous posts being blocked or taken down by automated algorithms. This has led to speculation that such systems are not simply malfunctioning, but are being manipulated behind the scenes to fit the political narrative a platform is trying to promote.
The alternative? Decentralised AI
The only logical counterweight to centralised AI is the development of decentralised AI systems that ensure control of the technology remains in the hands of the majority, rather than the few. By doing this, we can ensure that no single company or entity gains undue influence over the direction of AI’s development.
When the development and governance of AI is shared by thousands or millions of entities, its progress will be more equitable, with greater alignment to the needs of the individual. The result will be more diverse AI applications, with an almost endless selection of models used by different systems, instead of a few models that dominate the industry.
Decentralised AI systems will also provide checks and balances against the risk of mass surveillance and manipulation of data. Whereas centralised AI can be weaponised and used in ways that run contrary to the interests of the many, decentralised AI hedges against this kind of oppression.
The main advantage of decentralised AI is that everyone shares control over the technology’s evolution, preventing any single entity from gaining an outsized influence over its development.
How to decentralise AI
Decentralised AI involves a rethink of the layers that make up the AI technology stack, including elements like the infrastructure (compute and networking resources), the data, models, training, inference, and fine-tuning processes.
We can’t just put our hopes in open-source models if the underlying infrastructure remains fully centralised in the hands of cloud computing giants like Amazon, Microsoft and Google, for instance. We need to ensure that every aspect of AI is decentralised.
The best way to decentralise the AI stack is to break it down into modular components and create markets around them based on supply and demand. One such example of how this can work is Spheron, which has created a Decentralised Physical Infrastructure Network (DePIN) that anyone can participate in.
With Spheron’s DePIN, everyone is free to share their underutilised computing resources, essentially renting them out to those who need infrastructure to host their AI applications. So, a graphic designer who uses a powerful laptop with a GPU can donate processing power to the DePIN when they’re not using it for their own work, and be rewarded with token incentives.
What this means is that the AI infrastructure layer becomes widely distributed and decentralised, with no single provider in control. It’s enabled by blockchain technology and smart contracts, which provide transparency, immutability and automation.
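As a rough illustration of how such a marketplace might match supply and demand, here is a minimal sketch in Python. It is not Spheron’s actual contract logic or API; the class names, fields and matching rule are assumptions made purely for illustration, and in a real DePIN the matching, payment and reward logic would live in smart contracts rather than a single script.

```python
from dataclasses import dataclass, field

@dataclass
class ComputeListing:
    provider: str          # e.g. the graphic designer renting out a spare GPU
    gpu_memory_gb: int
    price_per_hour: float  # denominated in a hypothetical network token
    available: bool = True

@dataclass
class Marketplace:
    listings: list = field(default_factory=list)

    def add_listing(self, listing: ComputeListing) -> None:
        self.listings.append(listing)

    def match_job(self, min_gpu_memory_gb: int, max_price: float):
        """Return the cheapest available listing that satisfies the job's needs."""
        candidates = [
            l for l in self.listings
            if l.available
            and l.gpu_memory_gb >= min_gpu_memory_gb
            and l.price_per_hour <= max_price
        ]
        if not candidates:
            return None
        best = min(candidates, key=lambda l: l.price_per_hour)
        best.available = False   # reserve the resource for the job
        return best

market = Marketplace()
market.add_listing(ComputeListing("designer-laptop", gpu_memory_gb=8, price_per_hour=0.4))
market.add_listing(ComputeListing("home-server", gpu_memory_gb=24, price_per_hour=1.2))
print(market.match_job(min_gpu_memory_gb=8, max_price=1.0))  # matches the designer's GPU
```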
DePIN can also work for open-source models and underlying data. For instance, it’s possible to share training datasets on a decentralised network like Qubic, which will make sure the provider of that data is rewarded each time their information is accessed by an AI system.
To ensure access and permissions are genuinely decentralised, every part of the technology stack needs to be distributed in this way. However, the AI industry currently struggles to reach that level of decentralisation. Although open-source models have become extremely popular among AI developers, most people continue to rely on proprietary cloud networks, meaning the training and inference processes remain heavily centralised.
But there are strong incentives for decentralisation to win out. One of the primary advantages of DePIN networks, for example, is that they help to reduce overheads. Because networks like Spheron don’t rely on intermediaries, participants don’t need to make any payments or share revenue with third parties. Moreover, they can afford to be more competitive on pricing than corporations that are under pressure to grow profitability.
Decentralisation must win
The future of AI holds a lot of potential, but it’s also perilous. While the capabilities of AI systems have improved dramatically in the last few years, most of the advances have been made by a handful of powerful companies, increasing their influence over the industry. There’s a price to pay for this, and not just in monetary terms.
The only reasonable alternative is to promote the greater adoption of decentralised AI, which can enhance accessibility and make AI more flexible. By allowing everyone to participate in the development of AI on an equal footing, we’ll see more diverse, interesting, and useful applications that benefit everyone and put their users first.
Building a decentralised AI future will involve a great deal of coordination and collaboration across every layer of the AI stack. Fortunately, there are strong incentives for participants to do just that. And again, the incentives are not just monetary.
The post Centralised AI is dangerous: how can we stop it? appeared first on AI News.
The ASI Alliance has introduced AIRIS (Autonomous Intelligent Reinforcement Inferred Symbolism) that “learns” within the popular game, Minecraft.
AIRIS represents the first proto-AGI (Artificial General Intelligence) to harness a comprehensive tech stack across the alliance.
SingularityNET, founded by renowned AI researcher Dr Ben Goertzel, uses agent technology from Fetch.ai, incorporates Ocean Data for long-term memory capabilities, and is soon expected to integrate CUDOS Compute infrastructure for scalable processing power.
“AIRIS is a significant step in the direction of practical, scalable neural-symbolic learning, and – alongside its already powerful and valuable functionality – it illustrates several general points about neural-symbolic systems, such as their ability to learn precise generalisable conclusions from small amounts of data,” explains Goertzel.
According to the company, this alliance-driven procedure propels AIRIS towards AGI—crafting one of the first intelligent systems with autonomous and adaptive learning that holds practical applications for real-world scenarios.
AIRIS’ learning mechanisms
AIRIS is crafted to enhance its understanding by interacting directly with its environment, venturing beyond the traditional AI limitations that depend on predefined rules or vast datasets. Instead, AIRIS evolves through observation, experimentation, and continual refinement of its unique “rule set.”
This system facilitates a profound level of problem-solving and contextual comprehension, with its implementation in Minecraft setting a new benchmark for AI interaction with both digital and tangible landscapes.
(Media shared by the Artificial Superintelligence Alliance (@ASI_Alliance) on X, 5 November 2024.)
Shifting from a controlled 2D grid to the sophisticated 3D world of Minecraft, AIRIS faced numerous challenges—including terrain navigation and adaptive problem-solving in a dynamic environment. This transition underscores AIRIS’ autonomy in navigation, exploration, and learning.
The AIRIS Minecraft Agent distinguishes itself from other AI entities through several key features:
Dynamic navigation: AIRIS initially evaluates its milieu to formulate movement strategies, adapting to new environments in real-time. Its capabilities include manoeuvring around obstacles, jumping over barriers, and anticipating reactions to varied terrains.
Obstacle adaptation: It learns to navigate around impediments like cliffs and forested areas, refining its rule set with every new challenge to avoid redundant errors and minimise needless trial-and-error efforts.
Efficient pathfinding: Via continuous optimisation, AIRIS advances from initially complex navigation paths to streamlined, direct routes as it “comprehends” Minecraft dynamics.
Real-time environmental adaptation: Contrasting with conventional reinforcement learning systems that demand extensive retraining for new environments, AIRIS adapts immediately to unfamiliar regions, crafting new rules based on partial observations dynamically.
AIRIS’ adeptness in dealing with fluctuating terrains, including water bodies and cave systems, introduces sophisticated rule refinement founded on hands-on experience. Additionally, AIRIS boasts optimised computational efficiency—enabling real-time management of complex rules without performance compromises.
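The article does not describe AIRIS’ internals, but the observe-experiment-refine loop it attributes to the agent can be sketched in outline. Everything below, from the rule format to the toy environment, is an assumption made for illustration and is not the AIRIS implementation.

```python
class RuleLearningAgent:
    """Toy agent that keeps a rule set mapping (state, action) -> predicted outcome
    and refines it whenever the world contradicts a prediction -- loosely inspired
    by the observe/experiment/refine loop described above."""

    def __init__(self):
        self.rules = {}   # (state, action) -> predicted next state

    def choose_action(self, state, actions):
        # Prefer actions whose outcome is still unknown (exploration).
        for action in actions:
            if (state, action) not in self.rules:
                return action
        return actions[0]

    def observe(self, state, action, next_state):
        predicted = self.rules.get((state, action))
        if predicted != next_state:
            # Prediction failed (or was missing): refine the rule set.
            self.rules[(state, action)] = next_state

# Toy environment: two possible moves from a field, each with a fixed outcome.
transitions = {("field", "east"): "cliff", ("field", "jump"): "plateau"}
agent = RuleLearningAgent()
for _ in range(3):
    state = "field"
    action = agent.choose_action(state, ["east", "jump"])
    agent.observe(state, action, transitions[(state, action)])
print(agent.rules)  # the agent has learned both outcomes without retraining
```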
Future applications
Minecraft serves as an excellent launchpad for AIRIS’ prospective applications, establishing a solid foundation for expansive implementations:
Enhanced object interaction: Forthcoming stages will empower AIRIS to engage more profoundly with its surroundings, improving capabilities in object manipulation, construction, and even crafting. This development will require AIRIS to develop a more refined decision-making framework for contextual tasks.
Social AI collaboration: Plans are underway to incorporate AIRIS in multi-agent scenarios, where agents learn, interact, and fulfil shared objectives, simulating real-world social dynamics and problem-solving collaboratively.
Abstract and strategic reasoning: Expanded developments will enhance AIRIS’s reasoning, enabling it to tackle complex goals such as resource management and prioritisation, moving beyond basic navigation towards strategic gameplay.
The transition of AIRIS to 3D environments signifies a pivotal advancement in the ASI Alliance’s mission to cultivate AGI. Through AIRIS’s achievements in navigating and learning within Minecraft, the ASI Alliance aspires to expedite its deployment in the real world, pioneering applications for autonomous robots, intelligent home assistants, and other systems requiring adaptive learning and problem-solving capacities.
Berick Cook, AI Developer at SingularityNET and creator of AIRIS, said: “AIRIS is a whole new way of approaching the problem of machine learning. We are only just beginning to explore its capabilities. We are excited to see how we can apply it to problems that have posed a significant challenge for traditional reinforcement learning.
“The most important aspect of AIRIS to me is its transparency and explainability. Moving away from ‘Black Box’ AI represents a significant leap forward in the pursuit of safe, ethical, and beneficial AI.”
The innovative approach to AI evident in AIRIS – emphasising self-directed learning and continuous rule refinement – lays the foundation for AI systems capable of independent functioning in unpredictable real-world environments. Minecraft’s intricate ecosystem enables the system to hone its skills within a controlled yet expansive virtual setting, effectively bridging the divide between simulation and reality.
The AIRIS Minecraft Agent represents the inaugural tangible step towards an AI that learns from, adapts to and makes autonomous decisions about its environment. This accomplishment illustrates the potential of such technology to re-envision AI’s role across various industries.
(Image by SkyeWeste)
See also: SingularityNET bets on supercomputer network to deliver AGI
The post ASI Alliance launches AIRIS that ‘learns’ in Minecraft appeared first on AI News.
The combination of artificial intelligence and policymaking can occasionally have unforeseen repercussions, as seen recently in Alaska.
In an unusual turn of events, Alaska officials reportedly used inaccurate, AI-generated citations to justify a proposed policy banning cellphones in schools. As reported by The Alaska Beacon, Alaska’s Department of Education and Early Development (DEED) presented a policy draft containing references to academic studies that simply did not exist.
The situation arose when Alaska’s Education Commissioner, Deena Bishop, used generative AI to draft the cellphone policy. The document produced by the AI included supposed scholarly references that were neither verified nor accurate, yet the document did not disclose the use of AI in its preparation. Some of the AI-generated content reached the Alaska State Board of Education and Early Development before it could be reviewed, potentially influencing board discussions.
Commissioner Bishop later claimed that AI was used only to “create citations” for an initial draft and asserted that she corrected the errors before the meeting by sending updated citations to board members. However, AI “hallucinations”—fabricated information generated when AI attempts to create plausible yet unverified content—were still present in the final document that was voted on by the board.
The final resolution, published on DEED’s website, directs the department to establish a model policy for cellphone restrictions in schools. Unfortunately, the document included six citations, four of which seemed to be from respected scientific journals. However, the references were entirely made up, with URLs that led to unrelated content. The incident shows the risks of using AI-generated data without proper human verification, especially when making policy decisions.
Alaska’s case is not unique. AI hallucinations are increasingly common in a variety of professional sectors. For example, some legal professionals have faced consequences for using AI-generated, fictitious case citations in court. Similarly, academic papers created using AI have included distorted data and fake sources, presenting serious credibility concerns. When left unchecked, generative AI algorithms, which are meant to produce content based on patterns rather than factual accuracy, can easily produce misleading citations.
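One basic safeguard, offered here only as a sketch rather than anything Alaska’s DEED or the other organisations mentioned actually use, is to check programmatically that every cited URL at least resolves before a draft circulates. The helper name and the example citations below are hypothetical, and a live link still needs human review to confirm it supports the claim.

```python
import urllib.request
import urllib.error

def citation_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited URL responds with a non-error status.

    This only catches dead links; it cannot tell whether a live page actually
    supports the claim being cited, so human review is still required.
    """
    request = urllib.request.Request(url, method="HEAD", headers={"User-Agent": "citation-check"})
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False

draft_citations = [
    "https://doi.org/10.1000/fake-example",   # hypothetical, AI-generated reference
    "https://www.example.com/",
]
for url in draft_citations:
    print(url, "->", "resolves" if citation_resolves(url) else "check manually")
```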
The reliance on AI-generated data in policymaking, particularly in education, carries significant risks. When policies are developed based on fabricated information, they may misallocate resources and potentially harm students. For instance, a policy restricting cellphone use based on fabricated data may divert attention from more effective, evidence-based interventions that could genuinely benefit students.
Furthermore, using unverified AI data can erode public trust in both the policymaking process and AI technology itself. Such incidents underscore the importance of fact-checking, transparency, and caution when using AI in sensitive decision-making areas, especially in education, where impact on students can be profound.
Alaska officials attempted to downplay the situation, referring to the fabricated citations as “placeholders” intended for later correction. However, the document with the “placeholders” was still presented to the board and used as the basis for a vote, underscoring the need for rigorous oversight when using AI.
(Photo by Hartono Creative Studio)
See also: Anthropic urges AI regulation to avoid catastrophes
The post AI hallucinations gone wrong as Alaska uses fake stats in policy appeared first on AI News.
A global study has unveiled a stark contrast in attitudes towards embracing in-vehicle AI between Eastern and Western markets, with European drivers particularly reluctant.
The research – conducted by MHP – surveyed 4,700 car drivers across China, the US, Germany, the UK, Italy, Sweden, and Poland, revealing significant geographical disparities in AI acceptance and understanding.
According to the study, while AI is becoming integral to modern vehicles, European consumers remain hesitant about its implementation and value proposition.
Regional disparities
The study found that 48 percent of Chinese respondents view in-car AI predominantly as an opportunity, while merely 23 percent of European respondents share this optimistic outlook. In Europe, 39 percent believe AI’s opportunities and risks are broadly balanced, while 24 percent take a negative stance, suggesting the risks outweigh potential benefits.
Understanding of AI technology also varies significantly by region. While over 80 percent of Chinese respondents claim to understand AI’s use in cars, this figure drops to just 54 percent among European drivers, highlighting a notable knowledge gap.
Marcus Willand, Partner at MHP and one of the study’s authors, notes: “The figures show that the prospect of greater safety and comfort due to AI can motivate purchasing decisions. However, the European respondents in particular are often hesitant and price-sensitive.”
The willingness to pay for AI features shows an equally stark divide. Just 23 percent of European drivers expressed willingness to pay for AI functions, compared to 39 percent of Chinese drivers. The study suggests that most users now expect AI features to be standard rather than optional extras.
Dr Nils Schaupensteiner, Associated Partner at MHP and study co-author, said: “Automotive companies need to create innovations with clear added value and develop both direct and indirect monetisation of their AI offerings, for example through data-based business models and improved services.”
In-vehicle AI opportunities
Despite these challenges, traditional automotive manufacturers maintain a trust advantage over tech giants. The study reveals that 64 percent of customers trust established car manufacturers with AI implementation, compared to 50 percent for technology firms like Apple, Google, and Microsoft.
The research identified several key areas where AI could provide significant value across the automotive industry’s value chain, including pattern recognition for quality management, enhanced data management capabilities, AI-driven decision-making systems, and improved customer service through AI-powered communication tools.
“It is worth OEMs and suppliers considering the opportunities offered by the new technology along their entire value chain,” explains Augustin Friedel, Senior Manager and study co-author. “However, the possible uses are diverse and implementation is quite complex.”
The study reveals that while up to 79 percent of respondents express interest in AI-powered features such as driver assistance systems, intelligent route planning, and predictive maintenance, manufacturers face significant challenges in monetising these capabilities, particularly in the European market.
See also: MIT breakthrough could transform robot training
The post Western drivers remain sceptical of in-vehicle AI appeared first on AI News.
GitHub is marking a significant evolution of its developer tools with two major announcements: a multi-model upgrade to Copilot and the introduction of Spark, a natural language web development platform.
The code hosting giant is expanding Copilot beyond its OpenAI-only architecture to include Anthropic’s Claude 3.5 Sonnet and Google’s Gemini 1.5 Pro models while simultaneously launching GitHub Spark to simplify web application development through natural language instructions.
The expansion includes additional integration news, as Copilot arrives on Apple’s Xcode in public preview. This three-pronged update represents GitHub’s most ambitious AI toolkit expansion since Copilot’s initial release.
Enhanced model support for Copilot
GitHub Copilot has long leveraged different large language models (LLMs) for various use cases. The first public version of Copilot was launched using Codex, a version of OpenAI’s GPT-3 fine-tuned specifically for coding tasks. Copilot Chat was launched in 2023 with GPT-3.5 and later GPT-4.
“Since then, we have updated the base model versions multiple times, using a range from GPT 3.5-turbo to GPT 4o and 4o-mini models for different latency and quality requirements,” Thomas Dohmke, GitHub’s CEO, said in a blog write-up.
GitHub’s decision to extend Copilot’s model support beyond OpenAI’s technology represents a notable evolution in its AI-assisted development strategy. The integration now includes models from industry leaders Anthropic and Google, alongside OpenAI, providing developers access to a broader range of AI capabilities.
The multi-model approach suggests GitHub’s commitment to offering developers more choice and flexibility in their AI-assisted coding journey.
“We are bringing developer choice to GitHub Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These new models will be rolling out—first in Copilot Chat, with OpenAI o1-preview and o1-mini available now, Claude 3.5 Sonnet rolling out progressively over the next week, and Google’s Gemini 1.5 Pro in the coming weeks. From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choices across many of GitHub Copilot’s surface areas and functions soon,” the CEO said.
GitHub’s expansion of model support is particularly significant as it allows developers to leverage different AI models’ strengths for various coding tasks. Each AI provider brings its own capabilities and specialisations, which could lead to more accurate and context-aware code suggestions across different programming languages and frameworks.
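GitHub has not published how Copilot decides which model handles a given request; the sketch below merely illustrates the kind of latency-versus-quality routing the quoted blog post alludes to. The model profiles, keywords and rankings are invented for the example and should not be read as benchmarks or as Copilot’s actual behaviour.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    relative_latency: int   # 1 = fastest; illustrative ranking only
    relative_quality: int   # 1 = strongest on complex reasoning; illustrative only

# Assumed profiles for the sake of the example, not published figures.
MODELS = [
    ModelProfile("gpt-4o-mini", relative_latency=1, relative_quality=3),
    ModelProfile("claude-3.5-sonnet", relative_latency=2, relative_quality=2),
    ModelProfile("o1-preview", relative_latency=3, relative_quality=1),
]

def pick_model(task: str) -> ModelProfile:
    """Route quick completions to a fast model and heavier reasoning to a stronger one."""
    needs_deep_reasoning = any(k in task.lower() for k in ("refactor", "security", "architecture"))
    key = (lambda m: m.relative_quality) if needs_deep_reasoning else (lambda m: m.relative_latency)
    return min(MODELS, key=key)

print(pick_model("complete this for-loop").name)               # gpt-4o-mini
print(pick_model("refactor the auth module securely").name)    # o1-preview
```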
GitHub Spark: Natural language web development
However, the company’s most intriguing announcement is the introduction of GitHub Spark, a new tool designed to revolutionise web application development and achieve GitHub’s vision of reaching one billion developers. Spark enables developers to build web applications using natural language instructions, potentially reducing the complexity and time required for web development projects.
“It all starts with a spark of inspiration. Bring your ideas to life and build applications entirely in natural language. Sign up for the technical preview of GitHub Spark,” GitHub announced on X on November 2, 2024.
This natural language approach to web development could particularly benefit:
Developers looking to rapidly prototype applications,
Teams seeking to bridge the gap between design concepts and implementation,
Organisations aiming to accelerate their web development lifecycle.
Copilot arrives on Apple’s Xcode
The recent release of Copilot for Apple’s Xcode in public preview form represents another milestone in GitHub’s expansion strategy. The integration brings AI-powered coding assistance to Apple’s integrated development environment, offering several advantages for iOS and macOS developers:
Real-time code suggestions within the Xcode environment,
Improved productivity for Swift and Objective-C development,
Integration with existing Xcode workflows.
Impact on developer productivity
These developments in GitHub’s AI toolkit reflect a broader industry trend towards more intelligent and automated development tools. The combination of expanded model support, natural language web development, and Xcode integration suggests a comprehensive approach to enhancing developer productivity across different platforms and development scenarios.
For development teams, the updates could mean:
Reduced time spent on boilerplate code,
More efficient problem-solving through AI-assisted development,
Greater flexibility in choosing AI models that best suit specific project needs.
As these tools become available to more developers, we may see shifts in how teams approach software development:
Increased adoption of AI-assisted coding practices,
Evolution of development workflows to incorporate natural language instructions,
Greater emphasis on high-level problem-solving rather than implementation details.
Advancements in GitHub’s AI capabilities mark a step forward in the evolution of development tools. While maintaining the fundamental aspects of software development, the new features and integrations aim to enhance rather than replace developer expertise, potentially leading to more efficient and productive development processes.
The combination of expanded AI model support, GitHub Spark’s introduction, and Xcode integration represents GitHub’s holistic approach to modernising software development. As the tools mature and developers incorporate them into their workflows, we may see significant changes in how software is conceived, developed, and deployed.
(Photo by GitHub)
See also: GitHub CEO: The EU ‘will define how the world regulates AI’
The post GitHub expands AI suite: Copilot goes multi-model, Spark debuts for web dev appeared first on AI News.
Google has big hopes for AI, as evidenced by the consistent improvements to its Gemini chatbot in recent months.
At its I/O developer conference in May, Google briefly introduced its vision for a “universal AI agent” aimed at helping users with daily tasks, hinting that elements of the technology could be incorporated into Gemini soon. Recent insights from The Information have shed more light on the initiative, known internally as Project Jarvis.
Project Jarvis represents a major advancement in AI for Google. Unlike traditional voice assistants that respond to user commands, Jarvis is designed to perform tasks autonomously, navigate the web, and make independent decisions. For instance, Jarvis could manage emails, conduct research, and even schedule appointments, reducing the cognitive load involved in managing digital tasks.
Jarvis’s core objective is to revolutionise how users interact with their devices. Rather than serving as a passive tool awaiting commands, Jarvis would actively engage in real-time task management, positioning it as an AI partner rather than a utility.
For legal professionals, Jarvis could review large volumes of case documents and organise them by relevance, streamlining workflow. Similarly, marketers could use Jarvis to integrate data from numerous sources, allowing them to focus more on strategy and less on administrative work.
The evolution of AI agents such as Jarvis may have an impact on specific job roles. Tasks formerly performed by entry-level administrative personnel may come within the capabilities of AI assistants. However, the shift is likely to generate opportunities in roles that require critical thinking, creativity, and emotional intelligence—qualities not easily replicated by AI.
Industry observers anticipate a shift toward higher-value work, with people concentrating less on routine tasks and more on areas that promote innovation and strategic decision-making.
Privacy and security considerations
Project Jarvis raises significant privacy and security issues due to its ability to access sensitive information such as emails and documents. To prevent unauthorised access, Google will most likely deploy enhanced encryption, strict user restrictions, and possibly multi-factor authentication. Robust cybersecurity will also be essential to protect Jarvis from external threats.
Surveys indicate that, while AI holds considerable appeal, privacy remains a top concern for many users. Experts recommend measures such as a transparent privacy dashboard that enables users to monitor and control Jarvis’s access to data. To build trust and drive the adoption of AI agents like Jarvis, Google will need to strike a balance between convenience and robust privacy protections.
Enhancing user experience and accessibility
Beyond productivity, Jarvis has the potential to improve accessibility for a wide range of users. For those with disabilities, Jarvis could read web content aloud or use voice commands to assist with form navigation. For less tech-savvy users, Jarvis could simplify digital interactions by handling tasks like locating files or managing settings.
Jarvis could also assist in planning a busy workday or booking a trip by actively supporting task management. Project Jarvis aims to reimagine AI as a supportive digital partner, enhancing the user experience beyond that of a conventional tool.
(Photo by Igor Bumba)
See also: Google advances mobile AI in Pixel 9 smartphones
The post Project Jarvis leak reveals Google’s vision for next-gen Gemini appeared first on AI News.
Anthropic has flagged the potential risks of AI systems and is calling for well-structured regulation to avoid catastrophes. The organisation argues that targeted regulation is essential to harness AI’s benefits while mitigating its dangers.
As AI systems evolve in capabilities such as mathematics, reasoning, and coding, their potential misuse in areas like cybersecurity or even biological and chemical disciplines significantly increases.
Anthropic warns the next 18 months are critical for policymakers to act, as the window for proactive prevention is narrowing. Notably, Anthropic’s Frontier Red Team highlights how current models can already contribute to various cyber offense-related tasks and expects future models to be even more effective.
Of particular concern is the potential for AI systems to exacerbate chemical, biological, radiological, and nuclear (CBRN) misuse. The UK AI Safety Institute found that several AI models can now match PhD-level human expertise in providing responses to science-related inquiries.
In addressing these risks, Anthropic has detailed its Responsible Scaling Policy (RSP) that was released in September 2023 as a robust countermeasure. RSP mandates an increase in safety and security measures corresponding to the sophistication of AI capabilities.
The RSP framework is designed to be adaptive and iterative, with regular assessments of AI models allowing for timely refinement of safety protocols. Anthropic says its commitment to maintaining and enhancing safety spans various team expansions, particularly in security, interpretability, and trust, ensuring readiness for the rigorous safety standards set by its RSP.
Anthropic believes the widespread adoption of RSPs across the AI industry, while primarily voluntary, is essential for addressing AI risks.
Transparent, effective regulation is crucial to reassure society of AI companies’ adherence to promises of safety. Regulatory frameworks, however, must be strategic, incentivising sound safety practices without imposing unnecessary burdens.
Anthropic envisions regulations that are clear, focused, and adaptive to evolving technological landscapes, arguing that these are vital in achieving a balance between risk mitigation and fostering innovation.
In the US, Anthropic suggests that federal legislation could be the ultimate answer to AI risk regulation—though state-driven initiatives might need to step in if federal action lags. Legislative frameworks developed by countries worldwide should allow for standardisation and mutual recognition to support a global AI safety agenda, minimising the cost of regulatory adherence across different regions.
Furthermore, Anthropic addresses scepticism towards imposing regulations—highlighting that overly broad use-case-focused regulations would be inefficient for general AI systems, which have diverse applications. Instead, regulations should target fundamental properties and safety measures of AI models.
While covering broad risks, Anthropic acknowledges that some immediate threats – like deepfakes – aren’t the focus of their current proposals since other initiatives are tackling these nearer-term issues.
Ultimately, Anthropic stresses the importance of instituting regulations that spur innovation rather than stifle it. The initial compliance burden, though inevitable, can be minimised through flexible and carefully-designed safety tests. Proper regulation can even help safeguard both national interests and private sector innovation by securing intellectual property against threats internally and externally.
By focusing on empirically measured risks, Anthropic plans for a regulatory landscape that neither biases against nor favours open or closed-source models. The objective remains clear: to manage the significant risks of frontier AI models with rigorous but adaptable regulation.
(Image Credit: Anthropic)
See also: President Biden issues first National Security Memorandum on AI
The post Anthropic urges AI regulation to avoid catastrophes appeared first on AI News.
The Open Source Initiative (OSI) has unveiled a definition framework to evaluate whether AI systems can be classified as open-source.
The announcement of the first Open Source AI Definition (OSAID) was made at All Things Open and marks the culmination of a comprehensive global effort spanning multiple years of research, international workshops, and a year-long community design process.
The OSI – widely recognised as the definitive authority on open-source definitions by individuals, organisations, and government bodies worldwide – developed the framework through extensive collaboration with industry stakeholders. The framework defines what open-source AI means, insisting that the same open-source requirements apply whether the subject is a fully functional AI system, a model, weights and parameters, or other structural elements.
An open-source AI system must be made available under terms that grant four essential freedoms:
Use the system for any purpose and without having to ask for permission.
Study how the system works and inspect its components.
Modify the system for any purpose, including to change its output.
Share the system for others to use with or without modifications, for any purpose.
These freedoms apply both to a fully functional system and to discrete elements of a system. A precondition to exercising these freedoms is having access to the preferred form to make modifications to the system, which includes detailed data information, complete source code, and model parameters.
“The co-design process that led to version 1.0 of the Open Source AI Definition was well-developed, thorough, inclusive, and fair,” said Carlo Piana, OSI board chair. “The board is confident that the process has resulted in a definition that meets the standards of open-source as defined in the open-source definition and the four essential freedoms.”
One of the framework’s most significant requirements is the mandate for open-source models to provide sufficient information about their training data, ensuring that “a skilled person can recreate a substantially equivalent system using the same or similar data,” according to Ayah Bdeir, who leads AI strategy at Mozilla.
Bdeir acknowledged that whilst this approach might not be perfect, it represents a practical compromise between ideological purity and real-world implementation. She suggested that demanding an unrealistically high standard could prove counterproductive to the initiative’s goals.
The Digital Public Goods Alliance (DPGA) has expressed support for the OSI’s leadership in defining open-source AI. Liv Marte Nordhaug, CEO of the DPGA secretariat, confirmed that her organisation will incorporate this foundational work into updates to their Digital Public Goods Standard for AI applications.
EleutherAI Institute, known for its non-profit work in AI development, has also endorsed the definition.
“The Open Source AI Definition is a necessary step towards promoting the benefits of open-source principles in the field of AI,” stated Stella Biderman, Executive Director of the EleutherAI Institute. “We believe that this definition supports the needs of independent machine learning researchers and promotes greater transparency among the largest AI developers.”
The definition highlights the importance of including data information and code when sharing open-source models and weights. These requirements ensure transparency and the ability to modify the AI system.
OSI Executive Director Stefano Maffulli acknowledged the challenges faced during the development process, noting that despite occasional heated exchanges and differing opinions, the final result aligned with the project’s initial objectives.
“This is a starting point for a continued effort to engage with the communities to improve the definition over time,” he stated.
The OSAID does not require a specific legal mechanism for assuring that model parameters are freely available to all, though it may involve licences or legal instruments. This aspect is expected to become clearer over time as the legal system addresses these open-source AI systems.
See also: President Biden issues first National Security Memorandum on AI
The post Industry leaders back open-source AI definition appeared first on AI News.
MIT researchers have developed a robot training method that reduces time and cost while improving adaptability to new tasks and environments.
The approach – called Heterogeneous Pretrained Transformers (HPT) – combines vast amounts of diverse data from multiple sources into a unified system, effectively creating a shared language that generative AI models can process. This method marks a significant departure from traditional robot training, where engineers typically collect specific data for individual robots and tasks in controlled environments.
Lead researcher Lirui Wang – an electrical engineering and computer science graduate student at MIT – believes that while many cite insufficient training data as a key challenge in robotics, a bigger issue lies in the vast array of different domains, modalities, and robot hardware. Their work demonstrates how to effectively combine and utilise all these diverse elements.
The research team developed an architecture that unifies various data types, including camera images, language instructions, and depth maps. HPT utilises a transformer model, similar to those powering advanced language models, to process visual and proprioceptive inputs.
In practical tests, the system demonstrated remarkable results—outperforming traditional training methods by more than 20 per cent in both simulated and real-world scenarios. This improvement held true even when robots encountered tasks significantly different from their training data.
The researchers assembled an impressive dataset for pretraining, comprising 52 datasets with over 200,000 robot trajectories across four categories. This approach allows robots to learn from a wealth of experiences, including human demonstrations and simulations.
One of the system’s key innovations lies in its handling of proprioception (the robot’s awareness of its position and movement). The team designed the architecture to place equal importance on proprioception and vision, enabling more sophisticated dexterous motions.
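The published description mentions modality-specific inputs feeding a shared transformer; the NumPy sketch below illustrates only that tokenisation idea. The dimensions, vocabulary and random projections are arbitrary stand-ins for learned weights, and the transformer trunk itself is omitted, so this is not the HPT architecture, just a hint of how heterogeneous observations can be mapped into one token sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
TOKEN_DIM = 64   # shared embedding width; value chosen for illustration

# Modality-specific "stems": each maps a different input shape into the same
# token space so a single shared trunk could consume them. Linear projections
# stand in for learned stems here.
image_stem = rng.normal(size=(3 * 32 * 32, TOKEN_DIM)) * 0.01
proprio_stem = rng.normal(size=(7, TOKEN_DIM)) * 0.01           # e.g. 7 joint angles
language_vocab = {"pick": 0, "up": 1, "the": 2, "cube": 3}
language_embedding = rng.normal(size=(len(language_vocab), TOKEN_DIM)) * 0.01

def tokenize_observation(image, joint_angles, instruction):
    """Map one heterogeneous observation into a single sequence of shared tokens."""
    image_token = image.reshape(1, -1) @ image_stem                # (1, TOKEN_DIM)
    proprio_token = joint_angles.reshape(1, -1) @ proprio_stem     # (1, TOKEN_DIM)
    word_tokens = language_embedding[[language_vocab[w] for w in instruction.split()]]
    return np.concatenate([image_token, proprio_token, word_tokens], axis=0)

tokens = tokenize_observation(
    image=rng.normal(size=(3, 32, 32)),
    joint_angles=rng.normal(size=(7,)),
    instruction="pick up the cube",
)
print(tokens.shape)  # (6, 64): one image token, one proprioception token, four word tokens
```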
Looking ahead, the team aims to enhance HPT’s capabilities to process unlabelled data, similar to advanced language models. Their ultimate vision involves creating a universal robot brain that could be downloaded and used for any robot without additional training.
While acknowledging they are in the early stages, the team remains optimistic that scaling could lead to breakthrough developments in robotic policies, similar to the advances seen in large language models.
You can find a copy of the researchers’ paper here (PDF)
(Photo by Possessed Photography)
See also: Jailbreaking AI robots: Researchers sound alarm over security flaws
The post MIT breakthrough could transform robot training appeared first on AI News.
President Biden has issued the US’ first-ever National Security Memorandum (NSM) on AI, addressing how the nation approaches the technology from a security perspective.
The memorandum, which builds upon Biden’s earlier executive order on AI, is founded on the premise that cutting-edge AI developments will substantially impact national security and foreign policy in the immediate future.
Security experts suggest the implications are already being felt. “AI already has implications for national security, as we know that more and more attackers are using AI to create higher volume and more complex attacks, especially in the social engineering and misinformation fronts,” says Melissa Ruzzi, Director of AI at AppOmni.
At its core, the NSM outlines three primary objectives: establishing US leadership in safe AI development, leveraging AI technologies for national security, and fostering international governance frameworks.
“Our competitors want to upend US AI leadership and have employed economic and technological espionage in efforts to steal US technology,” the memorandum states, elevating the protection of American AI innovations to a “top-tier intelligence priority.”
The document formally designates the AI Safety Institute as the primary governmental point of contact for the AI industry. This institute will be staffed with technical experts and will maintain close partnerships with national security agencies, including the intelligence community, Department of Defence, and Department of Energy.
“The actions listed in the memo are great starting points to get a good picture of the status quo and obtain enough information to make decisions based on data, instead of jumping to conclusions to make decisions based on vague assumptions,” Ruzzi explains.
However, Ruzzi cautions that “the data that needs to be collected on the actions is not trivial, and even with the data, assumptions and trade-offs will be necessary for final decision making. Making decisions after data gathering is where the big challenge will be.”
In a notable move to democratise AI research, the memorandum reinforces support for the National AI Research Resource pilot programme. This initiative aims to extend AI research capabilities beyond major tech firms to universities, civil society organisations, and small businesses.
The NSM introduces the Framework to Advance AI Governance and Risk Management in National Security (PDF), which establishes comprehensive guidelines for implementing AI in national security applications. These guidelines mandate rigorous risk assessment procedures and safeguards against privacy invasions, bias, discrimination, and human rights violations.
Security considerations feature prominently in the framework, with Ruzzi emphasising their importance: “Cybersecurity of AI is crucial – we know that if AI is misconfigured, it can pose risks similar to misconfigurations in SaaS applications that cause confidential data to be exposed.”
On the international front, the memorandum builds upon recent diplomatic achievements, including the G7’s International Code of Conduct on AI and agreements reached at the Bletchley and Seoul AI Safety Summits. Notably, 56 nations have endorsed the US-led Political Declaration on the Military Use of AI and Autonomy.
The Biden administration has also secured a diplomatic victory with the passage of the first UN General Assembly Resolution on AI, which garnered unanimous support, including co-sponsorship from China.
The memorandum emphasises the critical role of semiconductor manufacturing in AI development, connecting to Biden’s earlier CHIPS Act. It directs actions to enhance chip supply chain security and diversity, ensuring American leadership in advanced computing infrastructure.
This latest initiative forms part of the Biden-Harris Administration’s broader strategy for responsible innovation in the AI sector, reinforcing America’s commitment to maintaining technological leadership while upholding democratic values and human rights.
(Photo by Nils Huenerfuerst)
See also: EU AI Act: Early prep could give businesses competitive edge
The post President Biden issues first National Security Memorandum on AI appeared first on AI News.
Influencer partnerships can be great for brands looking to pump out content that promotes their products and services in an authentic way. These types of engagements can yield significant brand awareness and brand sentiment lift, but they can be risky too. Social media stars are unpredictable at the best of times, with many deliberately chasing controversy to increase their fame.
These antics don’t always reflect well on the brands that collaborate with especially attention-hungry influencers, leaving marketers no choice but to conduct careful due diligence on the individuals they work with. Luckily, that task can be made much easier thanks to the evolving utility of AI.
Lightricks, a software company best known for its AI-powered video and image editing tools, is once again expanding the AI capabilities of its suite with this week’s announcement of SafeCollab. An AI-powered influencer vetting module that lives within the company’s Popular Pays creator collaboration platform, SafeCollab is a new tool for marketers that automates the vetting process.
Traditionally, marketers have had no choice but to spend hours researching the backgrounds of influencers, looking through years’ worth of video uploads and social media posts. It’s a lengthy, manual process that can only be automated with intelligent tools.
SafeCollab provides that intelligence with its underlying large language models, which do the job of investigating influencers to ensure the image they portray is consistent with brand values. The LLMs perform what amounts to a risk assessment of creators’ content across multiple social media channels in minutes, searching through hours of videos, audio uploads, images and text.
In doing this, SafeCollab significantly reduces the time it takes for brand marketers to perform due diligence on the social media influencers they’re considering partnering with. Likewise, when creators opt in to SafeCollab, they make it easier for marketers to understand the brand safety implications of working together, reducing friction from campaign lifecycles.
Brands can’t take chances
The idea here is to empower brand marketers to avoid working with creators whose content is not aligned with the brand’s values – as well as those who have a tendency to kick up a storm.
Such due diligence is vital, for even the most innocuous influencers can have some skeletons in their closets. A case in point is the popular lifestyle influencer Brooke Schofield, who has more than 2.2 million followers on TikTok and co-hosts the “Canceled” podcast on YouTube. With her large following, good looks and keen sense of fashion, Schofield looked like a great fit for the clothing brand Boys Lie, which collaborated with her on an exclusive capsule collection called “Bless His Heart.”
However, Boys Lie quickly came to regret its collaboration with Schofield when a scandal erupted in April after fans unearthed a number of years-old social media posts where she expressed racist views.
The posts, which were uploaded on X between 2012 and 2015 when Schofield was a teenager, contained a string of racist profanities and insulting jokes about Black people’s hairstyles. In one post, she vigorously defended George Zimmerman, a white Hispanic who was controversially acquitted of the murder of the Black teenager Trayvon Martin.
Schofield apologized profusely for her posts, admitting that they were “very hurtful” while stressing that she’s a changed person, having had time to “learn and grow and formulate my own opinions.”
However, Boys Lie decided it had no option but to drop its association with Schofield. After posting a statement on Instagram saying it was “working on a solution,” the company quietly withdrew the clothing collection they had previously collaborated on.
Accelerating due diligence
If the marketing team at Boys Lie had access to a tool like SafeCollab, they likely would have uncovered Schofield’s controversial posts long before commissioning the collaboration. The tool, which is a part of Lightricks’ influencer marketing platform Popular Pays, is all about helping brands to automate their due diligence processes when working with social media creators.
By analyzing years of creators’ histories of posts across platforms like Instagram, TikTok, and YouTube, it can check everything they’ve posted online to make sure there’s nothing that might reflect badly on a brand.
Brands can define their risk parameters, and the tool will quickly generate an accurate risk assessment, so they can confidently choose the influencers they want to work with, safe in the knowledge that their partnerships are unlikely to spark any backlash.
Without a platform like SafeCollab, the task of performing all of this due diligence falls on the shoulders of marketers, and that means spending hours trawling through each influencer’s profiles, checking everything and anything they’ve ever said or done to ensure there’s nothing in their past that the brand would rather not be associated with.
When we consider that the scope of work might include audio voiceovers, extensive comment threads and frame-by-frame analyses of video content, it’s a painstaking process that never really ends. After all, the top influencers have a habit of churning out fresh content every day. Careful marketers have no choice but to continuously monitor what they’re posting.
Beyond initial history scans, SafeCollab’s real-time monitoring algorithms assume full responsibility, generating instant alerts about any problematic content, such as posts that contain graphic language or inappropriate images, promote ********* or drug and alcohol use, mention *********, or whatever else the brand deems to be unsavory.
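Lightricks has not disclosed how SafeCollab is built; the sketch below only illustrates the general pattern of screening a creator’s post history against brand-defined risk categories. It uses a trivial keyword matcher where the real product reportedly relies on large language models, and every name, category and post in it is hypothetical.

```python
from dataclasses import dataclass

# Brand-defined risk parameters: category -> example trigger terms.
# In a real system these checks would be made by a language model over text,
# transcripts and image descriptions, not by keyword matching.
RISK_CATEGORIES = {
    "graphic_language": {"explicit", "profanity"},
    "substance_use": {"alcohol", "drunk"},
}

@dataclass
class Alert:
    post_id: str
    category: str
    excerpt: str

def screen_posts(posts, risk_categories=RISK_CATEGORIES):
    """Yield an alert for every post matching one of the brand's risk categories."""
    for post in posts:
        text = post["text"].lower()
        for category, terms in risk_categories.items():
            if any(term in text for term in terms):
                yield Alert(post["id"], category, post["text"][:60])

history = [
    {"id": "p1", "text": "Loved this hike, what a view!"},
    {"id": "p2", "text": "Got completely drunk at the afterparty lol"},
]
for alert in screen_posts(history):
    print(alert)   # Alert(post_id='p2', category='substance_use', ...)
```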
AI’s expanding applications
With the launch of SafeCollab, Lightricks is demonstrating yet another use case for generative AI. The company first made a name for itself as a developer of AI-powered video and image editing apps, including Photoleap, Facetune and Videoleap.
The latter app incorporates AI-powered video filters and text-to-video generative AI functionalities. It also boasts an AI Effects feature, where users can apply specialized AI art styles to achieve the desired vibe for each video they create.
Lightricks is also the company behind LTX Studio, which is a comprehensive platform that helps advertising production firms and filmmakers to create storyboards and asset-rich pitch decks for their video projects using text-to-video generative AI.
With all of Lightricks’ AI apps, the primary benefit is that they save users time by automating manual work and bringing creative visions to life, and SafeCollab is a great example of that. By automating the due diligence process from start to finish, marketers can quickly identify controversial influencers they’d rather steer clear of, without spending hours conducting exhaustive research.
The post AI is helping brands avoid controversial influencer partnerships appeared first on AI News.
ByteDance, the creator of TikTok, recently experienced a security breach involving an intern who allegedly sabotaged AI model training. The incident, reported on WeChat, raised concerns about the company’s security protocols in its AI department.
In response, ByteDance clarified that while the intern disrupted AI commercialisation efforts, no online operations or commercial projects were affected. According to the company, rumours that over 8,000 GPU cards were affected and that the breach resulted in millions of dollars in losses are blown out of proportion.
The real issue here goes beyond one rogue intern—it highlights the need for stricter security measures in tech companies, especially when interns are entrusted with key responsibilities. Even minor mistakes in high-pressure environments can have serious consequences.
On investigating, ByteDance found that the intern, a doctoral student, was part of the commercialisation tech team, not the AI Lab. The individual was dismissed in August.
According to the local media outlet Jiemian, the intern became frustrated with resource allocation and retaliated by exploiting a vulnerability in the AI development platform Hugging Face. This led to disruptions in model training, though ByteDance’s commercial Doubao model was not affected.
Despite the disruption, ByteDance’s automated machine learning (AML) team initially struggled to identify the cause. Fortunately, the attack only impacted internal models, minimising broader damage.
As context, China’s AI market, estimated to be worth $250 billion in 2023, is rapidly increasing in size, with industry leaders such as Baidu AI Cloud, SenseRobot, and Zhipu AI driving innovation. However, incidents like this one pose a huge risk to the commercialisation of AI technology, as model accuracy and reliability are directly related to business success.
The situation also raises questions about intern management in tech companies. Interns often play crucial roles in fast-paced environments, but without proper oversight and security protocols, their roles can pose risks. Companies must ensure that interns receive adequate training and supervision to prevent unintentional or malicious actions that could disrupt operations.
Implications for AI commercialisation
The security breach highlights the possible risks to AI commercialisation. A disruption in AI model training, such as this one, can cause delays in product releases, loss of client trust, and even financial losses. For a company like ByteDance, where AI drives core functionalities, these kinds of incidents are particularly damaging.
The issue also emphasises the importance of ethical AI development and business responsibility. Companies must not only develop cutting-edge AI technology but also ensure its security and manage it responsibly. Transparency and accountability are critical for retaining trust in an era when AI plays an important role in business operations.
(Photo by Jonathan Kemper)
See also: Microsoft gains major AI client as TikTok spends $20 million monthly
The post Intern allegedly sabotages ByteDance AI project, leading to dismissal appeared first on AI News.
A comprehensive AI sector study – conducted by the Department for Science, Innovation and Technology (DSIT) in collaboration with Perspective Economics, Ipsos, and glass.ai – provides a detailed overview of the industry’s current state and its future prospects.
In this article, we delve deeper into the key findings and implications—drawing on additional sources to enhance our understanding.
Thriving industry with significant growth
The study highlights the remarkable growth of the UK’s AI sector, which now counts over 3,170 active AI companies. These firms have generated £10.6 billion in AI-related revenues and employ more than 50,000 people in AI-related roles. This significant contribution to GVA (Gross Value Added) underscores the sector’s transformative potential in driving the UK’s economic growth.
Mark Boost, CEO of Civo, said: “In a space that’s been dominated by US companies for too long, it’s promising to see the government now stepping up to help support the UK AI sector on the global stage.”
The study shows that AI activity is dispersed across various regions of the UK, with notable concentrations in London, the South East, and Scotland. This regional dispersion indicates a broad scope for the development of AI technology applications across different sectors and regions.
Investment and funding
Investment in the AI sector has been a key driver of growth. Since 2016, £18.8 billion in private investment has been secured, and in 2022 that funding was spread across 52 unique industry sectors, compared with 35 sectors in 2016.
The government’s commitment to supporting AI is evident through significant investments. In 2022, the UK government unveiled a National AI Strategy and Action Plan, committing over £1.3 billion in support for the sector, complementing the £2.8 billion already invested.
However, as Boost cautions, “Major players like AWS are locking AI startups into their ecosystems with offerings like $500k cloud credits, ensuring that emerging companies start their journey reliant on their infrastructure. This not only hinders competition and promotes vendor lock-in but also risks stifling innovation across the broader UK AI ecosystem.”
Addressing bottlenecks
Despite the growth and investment, several bottlenecks must be addressed to fully harness the potential of AI:
Infrastructure: The UK’s digital technology infrastructure is less advanced than that of many other countries. This bottleneck includes inadequate data centre infrastructure and a reliance on external suppliers for powerful GPU chips. Boost emphasises this concern, stating: “It would be dangerous for the government to ignore the immense compute power that AI relies on. We need to consider where this power is coming from and the impact it’s having on both the already over-concentrated cloud market and the environment.”
Commercial awareness: Many SMEs lack familiarity with digital technology. Almost a third (31%) of SMEs have yet to adopt the cloud, and nearly half (47%) do not currently use AI tools or applications.
Skills shortage: Two-fifths of businesses struggle to find staff with good digital skills, including traditional digital roles like data analytics or IT. There is a rising need for workers with new AI-specific skills, such as prompt engineering, that will require retraining and upskilling opportunities.
To address these bottlenecks, several initiatives are underway across the private and public sectors:
Private sector investment: Microsoft has announced a £2.5 billion investment in AI skills, security, and data centre infrastructure, aiming to procure more than 20,000 of the most advanced GPUs by 2026.
Government support: The government has invested £1.5 billion in computing capacity and committed to building three new supercomputers by 2025. This support aims to enhance the UK’s infrastructure to stay competitive in the AI market.
Public sector integration: The UK Government Digital Service (GDS) is working to improve efficiency using predictive algorithms for future pension scheme behaviour. HMRC uses AI to help identify call centre priorities, demonstrating how AI solutions can address complex public sector challenges.
Future prospects and challenges
The future of the UK AI sector is both promising and challenging. While significant economic gains are predicted, including boosting GDP by £550 billion by 2035, delays in AI roll-out could cost the UK £150 billion over the same period. Ensuring a balanced approach between innovation and regulation will be crucial.
Boost emphasises the importance of data sovereignty and privacy: “Businesses have grown increasingly wary of how their data is collected, stored, and used by the likes of ChatGPT. The government has a real opportunity to enable the UK AI sector to offer viable alternatives.
“The forthcoming AI Action Plan will be another opportunity to identify how AI can drive economic growth and better support the UK tech sector.”
AI Safety Summit: The AI Safety Summit at Bletchley Park highlighted the need for responsible AI development. The “Bletchley Declaration on AI Safety” emphasises the importance of ensuring AI tools are transparent, fair, and free from bias to maintain public trust and realise AI’s benefits in public services.
Cybersecurity challenges: As AI systems handle sensitive or personal information, ensuring their security is paramount. This involves protecting against cyber threats, securing algorithms from manipulation, safeguarding data centres and hardware, and ensuring supply chain security.
The AI sector study underscores a thriving industry with significant growth potential. However, it also highlights several bottlenecks that must be addressed – infrastructure gaps, lack of commercial awareness, and skills shortages – to fully harness the sector’s potential.
(Photo by John *******)
See also: EU AI Act: Early prep could give businesses competitive edge
The post AI sector study: Record growth masks serious challenges appeared first on AI News.
Penguin Random House (PRH) has taken a significant step in response to rising concerns about the use of intellectual property to train AI systems.
The publisher has introduced a new statement to the copyright pages of both new and reprinted books, stating, “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems.” This change is supplemented by a section that excludes PRH’s works from the European Union’s text and data mining exception, in accordance with applicable copyright laws.
As one of the first major publishers to address the issue of AI training explicitly, PRH is responding to the broader debate about how tech companies use copyrighted content to train large language models (LLMs), like those used in chatbots and other AI tools. Publishers have become increasingly concerned about the possible misuse of their intellectual property in recent years, especially after reports emerged that copyrighted books had been used by AI firms to enhance these technologies.
PRH’s move to amend its copyright page is an attempt to protect its content pre-emptively, even though such statements do not change the underlying legal framework of copyright. The clauses work similarly to a “robots.txt” file, which websites employ to request that their content not be scraped by bots or AI systems. While these notices indicate the publisher’s intent, they are not legally binding, and existing copyright protections apply in the absence of such disclaimers.
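For readers unfamiliar with the analogy, a minimal sketch of such a robots.txt file is shown below. The crawler names used here are real published user agents (GPTBot for OpenAI’s crawler and CCBot for Common Crawl’s), but they are chosen purely for illustration; which bots a site asks to stay away, and whether those bots comply, is outside the site owner’s control, much like PRH’s copyright-page notice.
# robots.txt - a request to crawlers, not an enforcement mechanism
# Ask OpenAI's crawler not to fetch any pages
User-agent: GPTBot
Disallow: /
# Ask Common Crawl's crawler (whose corpus is widely used for AI training) not to fetch any pages
User-agent: CCBot
Disallow: /
# All other crawlers remain unrestricted
User-agent: *
Disallow: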
PRH’s move also emphasises the ongoing tension between content creators and the AI industry, as more authors, publishers, and other creatives ask for stronger protections. The Authors’ Licensing and Collecting Society (ALCS) has been outspoken in its support for PRH’s actions. ALCS CEO Barbara Hayes expressed approval of the updated copyright language, emphasising the need for publishers to protect their works from unauthorised use in AI training.
However, some contend that simply changing copyright pages may not be enough. The Society of Authors (SoA) applauds PRH’s efforts, but believes more needs to be done to guarantee that authors’ rights are properly protected. SoA CEO Anna Ganley has called on publishers to go beyond these statements and incorporate explicit protections in author contracts, making sure that writers are informed before their work is used in AI-related initiatives.
As AI advances, the debate over its usage of copyrighted content remains far from over. PRH’s action could herald a larger shift in the publishing sector, but how other publishers and the legal system react remains to be seen.
(Image by StockSnap)
See also: AI governance gap: 95% of firms haven’t implemented frameworks
The post Penguin Random House protects its books from AI training use appeared first on AI News.