Discover how chatbots for marketing can boost your ROI with enhanced engagement and instant customer responses.
What are chatbots?
Chatbots are automated software applications designed to simulate human conversation. They interact with users through text or voice, providing immediate responses and performing various tasks. AI chatbots can understand and process natural language, enabling them to handle complex queries and provide relevant information or services.
Chatbots come in various forms, including:
Rule-based chatbots: Respond to specific commands predetermined by developers,
AI-driven chatbots: Use machine learning and natural language processing (NLP) to understand and adapt to user queries (see the sketch below for the contrast).
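A minimal Python sketch of that contrast; the model call is a stub standing in for whatever LLM API a real product would use:

```python
# Minimal contrast between the two chatbot styles. The "model" is a
# stub standing in for a real LLM API call (hypothetical helper).

def call_language_model(prompt: str) -> str:
    # Placeholder for an actual LLM provider call.
    return f"(model's free-form answer to: {prompt!r})"

def rule_based_reply(message: str) -> str:
    """Match fixed keywords predetermined by developers."""
    rules = {
        "hours": "We're open 9am-5pm, Monday to Friday.",
        "pricing": "Our plans start at $10/month.",
    }
    for keyword, reply in rules.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I didn't catch that. Try asking about 'hours' or 'pricing'."

def ai_driven_reply(message: str) -> str:
    """Hand the raw message to a language model that interprets intent."""
    return call_language_model(f"Answer this customer query helpfully: {message}")

print(rule_based_reply("What are your opening hours?"))
print(ai_driven_reply("I'm torn between two of your plans, can you help?"))
```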
The importance of chatbots in marketing
Chatbots have become an essential component in modern marketing strategies. They offer marketers a way to connect with consumers efficiently, enhance customer experience, and streamline interactions.
Enhanced customer engagement: Chatbots engage customers by providing instant responses and personalised interactions,
24/7 availability: They operate around the clock, ensuring customer inquiries are addressed at any time,
Cost-effectiveness: Reduce the need for extensive human customer support, lowering operational costs,
Data collection: Gather valuable customer data and insights for better marketing strategies,
Increased lead generation: Facilitate the collection of leads by interacting with potential customers and capturing their information.
Chatbots play a crucial role in improving customer engagement. They provide a dynamic and interactive way for businesses to communicate with their audience, fostering stronger relationships and increasing satisfaction. Chatbots can quickly address common questions, offer recommendations, and guide customers through the purchasing process, creating a more personalised experience.
One of the most valuable features of chatbots is their ability to operate around the clock. Unlike human support teams, chatbots are available 24/7, ensuring customers receive immediate assistance regardless of the time of day. Continuous availability can significantly enhance the customer experience, leading to higher levels of satisfaction and loyalty.
The implementation of chatbots in marketing strategies not only streamlines operations but also delivers a more consistent and fulfilling customer experience. The result is an increase in engagement, satisfaction, and overall effectiveness in marketing campaigns.
Implementing chatbots in marketing strategies
Integrating chatbots into marketing strategies can significantly enhance customer engagement and streamline overall marketing efforts. This section delves into how chatbots can be used for personalised customer interactions and how they facilitate data collection and analysis.
Chatbots offer a unique opportunity to create personalised interactions with customers. By using AI and machine learning, chatbots can tailor responses based on user behaviours, preferences, and past interactions. The personalised approach can make customers feel valued and understood, increasing their satisfaction and loyalty.
Key aspects of personalised customer interactions:
Greeting users by name: Addressing customers by their names makes interactions feel more personal and engaging,
Tailored recommendations: Based on previous interactions, chatbots can recommend products, content, or services that align with individual preferences,
Customising responses: Chatbots can adjust their responses based on the user’s mood, choices, and patterns (the sketch below shows the first two aspects in miniature).
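In toy form (the profile store and its fields are hypothetical):

```python
# Toy personalisation sketch: greet by name and recommend from past
# interactions. The profile store and its fields are illustrative.

user_profiles = {
    "u123": {"name": "Priya", "past_purchases": ["running shoes"]},
}

def personalised_reply(user_id: str) -> str:
    profile = user_profiles.get(user_id)
    if profile is None:
        return "Hi there! How can I help today?"
    last_item = profile["past_purchases"][-1]
    return (f"Welcome back, {profile['name']}! "
            f"Want to see new arrivals to go with your {last_item}?")

print(personalised_reply("u123"))  # personalised greeting + recommendation
print(personalised_reply("u999"))  # generic fallback for unknown users
```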
Chatbots are also important in gathering and analysing customer data. The interactions between users and chatbots generate valuable insights that can be used to optimise marketing strategies. Collecting this data helps businesses understand customer needs, behaviour patterns, and preferences.
Key areas where chatbots aid in data collection and analysis:
User interaction history: Chatbots store conversation logs, providing insights into common customer queries and issues,
Demographic data: Collecting information like age, location, and interests helps in segmenting the audience for targeted marketing,
Feedback mechanisms: Gathering feedback directly through chatbot interactions allows businesses to gauge customer satisfaction and areas for improvement.
In the realm of marketing, using chatbots can significantly boost return on investment (ROI). Two of the primary factors contributing to this increase are cost-effectiveness and enhanced lead generation and conversions.
Chatbots offer substantial cost savings for businesses. By automating customer interactions, businesses can reduce the need for a large customer service team. This not only lowers operational costs but also streamlines processes. The initial investment in chatbot technology is often offset by the long-term savings achieved through decreased labour costs and increased efficiency.
Also, chatbots can play a critical role in lead generation and conversions. By engaging users in real-time, chatbots can qualify leads, provide personalised recommendations, and guide users through the sales funnel. Immediate interaction can improve user experience and lead to higher conversion rates.
Utilising chatbots for marketing can lead to higher efficiency, greater customer interaction, and ultimately a better ROI. By understanding and implementing these technologies, businesses can enhance their marketing strategies and achieve substantial financial benefits.
The post Boost your ROI: The impact of chatbots on marketing appeared first on AI News.
AI is rapidly becoming ubiquitous across business systems and IT ecosystems, with adoption and development racing faster than anyone could have expected. Today it seems that everywhere we turn, software engineers are building custom models and integrating AI into their products, as business leaders incorporate AI-powered solutions in their working environments.
However, uncertainty about the best way to implement AI is stopping some companies from taking action. Boston Consulting Group’s latest Digital Acceleration Index (DAI), a global survey of 2,700 executives, revealed that only 28% say their organisation is fully prepared for new AI regulation.
Their uncertainty is exacerbated by AI regulations arriving thick and fast: the EU AI Act is on the way; Argentina released a draft AI plan; Canada has the AI and Data Act; China has enacted a slew of AI regulations; and the G7 nations launched the “Hiroshima AI process.” Guidelines abound, with the OECD developing AI principles, the UN proposing a new UN AI advisory body, and the Biden administration releasing a blueprint for an AI Bill of Rights (although that could quickly change with the second Trump administration).
Legislation is also coming in individual US states, and is appearing in many industry frameworks. To date, 21 states have enacted laws to regulate AI use in some manner, including the Colorado AI Act, and clauses in California’s CCPA, plus a further 14 states have legislation awaiting approval.
Meanwhile, there are loud voices on both sides of the AI regulation debate. A new survey from SolarWinds shows 88% of IT professionals advocate for stronger regulation, and separate research reveals that 91% of British people want the government to do more to hold businesses accountable for their AI systems. On the other hand, the leaders of over 50 tech companies recently wrote an open letter calling for urgent reform of the EU’s heavy AI regulations, arguing that they stifle innovation.
It’s certainly a tricky balance for business leaders and software developers, as regulators scramble to catch up with tech. Of course you want to take advantage of the benefits AI can provide, but in a way that sets you up for compliance with whatever regulatory requirements are coming, and without handicapping your AI use unnecessarily while your rivals speed ahead.
We don’t have a crystal ball, so we can’t predict the future. But we can share some best practices for setting up systems and procedures that will prepare the ground for AI regulatory compliance.
Map out AI usage in your wider ecosystem
You can’t manage your team’s AI use unless you know about it, but that alone can be a significant challenge. Shadow IT is already the scourge of cybersecurity teams: Employees sign up for SaaS tools without the knowledge of IT departments, leaving an unknown number of solutions and platforms with access to business data and/or systems.
Now security teams also have to grapple with shadow AI. Many apps, chatbots, and other tools incorporate AI, machine learning (ML), or natural language processing (NLP) without being obviously AI solutions. When employees log into these solutions without official approval, they bring AI into your systems without your knowledge.
As Opice Blum’s data privacy expert Henrique Fabretti Moraes explained, “Mapping the tools in use – or those intended for use – is crucial for understanding and fine-tuning acceptable use policies and potential mitigation measures to decrease the risks involved in their utilisation.”
Some regulations hold you responsible for AI use by vendors. To take full control of the situation, you need to map all the AI in your own and your partner organisations’ environments. In this regard, a tool like Harmonic can be instrumental in detecting AI use across the supply chain.
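As a far simpler illustration of the mapping idea than a commercial tool, the sketch below scans exported log lines for known AI-service domains; the log format, domain list, and approval set are all assumptions, not any vendor's real detection logic:

```python
# Minimal shadow-AI discovery sketch: flag log entries that hit known
# AI-service domains. Everything here is illustrative.

KNOWN_AI_DOMAINS = {
    "api.openai.com", "chat.openai.com", "claude.ai", "gemini.google.com",
}
APPROVED = {"api.openai.com"}  # services IT has sanctioned

def find_shadow_ai(log_lines):
    """Yield (user, domain) pairs for unapproved AI-service traffic."""
    for line in log_lines:
        user, domain = line.split(",")  # e.g. "alice,claude.ai"
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED:
            yield user, domain

logs = ["alice,claude.ai", "bob,api.openai.com", "carol,example.com"]
for user, domain in find_shadow_ai(logs):
    print(f"Unapproved AI use: {user} -> {domain}")
```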
Verify data governance
Data privacy and security are core concerns for all AI regulations, both those already in place and those on the brink of approval.
Your AI use already needs to comply with existing privacy laws like GDPR and the CCPA, which require you to know what data your AI can access and what it does with that data, and to demonstrate guardrails to protect the data AI uses.
To ensure compliance, you need to put robust data governance rules into place in your organisation, managed by a defined team, and backed up by regular audits. Your policies should include due diligence to evaluate data security and sources of all your tools, including those that use AI, to identify areas of potential bias and privacy risk.
“It is incumbent on organisations to take proactive measures by enhancing data hygiene, enforcing robust AI ethics and assembling the right teams to lead these efforts,” said Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds. “This proactive stance not only helps with compliance with evolving regulations but also maximises the potential of AI.”
Establish continuous monitoring for your AI systems
Effective monitoring is crucial for managing any area of your business. When it comes to AI, as with other areas of cybersecurity, you need continuous monitoring to ensure that you know what your AI tools are doing, how they are behaving, and what data they are accessing. You also need to audit them regularly to keep on top of AI use in your organisation.
“The idea of using AI to monitor and regulate other AI systems is a crucial development in ensuring these systems are both effective and ethical,” said Cache Merrill, founder of software development company Zibtek. “Currently, techniques like machine learning models that predict other models’ behaviours (meta-models) are employed to monitor AI. The systems analyse patterns and outputs of operational AI to detect anomalies, biases or potential failures before they become critical.”
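Full meta-model monitoring is beyond a short example, but the underlying idea of baselining behaviour and flagging deviations fits in a few lines. A minimal sketch, assuming you already log a simple per-response metric such as output length:

```python
# Sketch: flag anomalous model behaviour by comparing a logged output
# metric against a rolling baseline. Illustrative only; real platforms
# track far richer signals than response length.
from statistics import mean, stdev

def detect_anomalies(metric_history, window=30, z_threshold=3.0):
    """Return indices where the metric deviates sharply from the
    preceding rolling window (a crude stand-in for meta-model checks)."""
    alerts = []
    for i in range(window, len(metric_history)):
        baseline = metric_history[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(metric_history[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Example: chatbot response lengths, with one sudden outlier at the end.
lengths = [120, 115, 130, 125, 118] * 12 + [900]
print(detect_anomalies(lengths))  # -> [60]
```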
Cyber GRC automation platform Cypago allows you to run continuous monitoring and regulatory audit evidence collection in the background. Its no-code automation lets you build custom workflows without technical expertise, so alerts and mitigation actions are triggered instantly according to the controls and thresholds you define.
Cypago can connect with your various digital platforms, synchronise with virtually any regulatory framework, and turn all relevant controls into automated workflows. Once your integrations and regulatory frameworks are set up, creating custom workflows on the platform is as simple as uploading a spreadsheet.
Use risk assessments as your guidelines
It’s vital to know which of your AI tools are high-risk, medium-risk, and low-risk – for compliance with external regulations, for internal business risk management, and for improving software development workflows. High-risk use cases will need more safeguards and evaluation before deployment.
“While AI risk management can be started at any point in the project development,” said Ayesha Gulley, an AI policy expert from Holistic AI, “implementing a risk management framework sooner than later can help enterprises increase trust and scale with confidence.”
When you know the risks posed by different AI solutions, you can choose the level of access you’ll grant them to data and critical business systems.
In terms of regulations, the EU AI Act already distinguishes between AI systems with different risk levels, and NIST recommends assessing AI tools based on trustworthiness, social impact, and how humans interact with the system.
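As a minimal illustration of tier-based gating (the register, tier labels, and dataset names below are invented for the example, not EU AI Act or NIST categories):

```python
# Sketch of an internal AI risk register gating data access by tier.
# Tier assignments and datasets are illustrative, not legal categories.

RISK_REGISTER = {
    "support-chatbot":    {"tier": "low",    "allowed_data": {"public_docs"}},
    "cv-screening-model": {"tier": "high",   "allowed_data": set()},  # blocked pending review
    "sales-forecaster":   {"tier": "medium", "allowed_data": {"public_docs", "sales_history"}},
}

def may_access(tool: str, dataset: str) -> bool:
    """Allow a tool to touch a dataset only if its register entry permits it."""
    entry = RISK_REGISTER.get(tool)
    return entry is not None and dataset in entry["allowed_data"]

print(may_access("support-chatbot", "public_docs"))    # True
print(may_access("cv-screening-model", "hr_records"))  # False
```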
Proactively set AI ethics governance
You don’t need to wait for AI regulations to set up ethical AI policies. Allocate responsibility for ethical AI considerations, put together teams, and draw up policies for ethical AI use that include cybersecurity, model validation, transparency, data privacy, and incident reporting.
Plenty of existing frameworks like NIST’s AI RMF and ISO/IEC 42001 recommend AI best practices that you can incorporate into your policies.
“Regulating AI is both necessary and inevitable to ensure ethical and responsible use. While this may introduce complexities, it need not hinder innovation,” said Arik Solomon, CEO and co-founder of Cypago. “By integrating compliance into their internal frameworks and developing policies and processes aligned with regulatory principles, companies in regulated industries can continue to grow and innovate effectively.”
Companies that can demonstrate a proactive approach to ethical AI will be better positioned for compliance. AI regulations aim to ensure transparency and data privacy, so if your goals align with these principles, you’ll be more likely to have policies in place that comply with future regulation. The FairNow platform can help with this process, with tools for managing AI governance, bias checks, and risk assessments in a single location.
Don’t let fear of AI regulation hold you back
AI regulations are still evolving and emerging, creating uncertainty for businesses and developers. But don’t let the fluid situation stop you from benefiting from AI. By proactively implementing policies, workflows, and tools that align with the principles of data privacy, transparency, and ethical use, you can prepare for AI regulations and take advantage of AI-powered possibilities.
The post Preparing today for tomorrow’s AI regulations appeared first on AI News.
Business Insider’s “CXO AI Playbook” looks at how firms are utilising AI to tackle challenges, scale operations, and plan for the future.
The Playbook draws on stories from various industries to see what problems AI is solving, who’s driving these initiatives, and how it’s reshaping strategies.
Salesforce, well known for its CRM software used by over 150,000 companies like Amazon and Walmart, is no stranger to innovation. It also owns Slack, the popular workplace communication app. Salesforce is now stepping up its AI game with Agentforce, a platform that lets businesses build and deploy digital agents to automate tasks such as creating sales reports and summarising Slack conversations.
What problem is it solving?
Salesforce has been working with AI for years. In 2016, it launched Einstein, an AI feature baked into its CRM platform. Einstein handled basic scriptable tasks, but the rise of generative AI brought a chance to do more. Smarter tools could now make better decisions and understand natural language.
This sparked a transformation. First came Einstein GPT, then Einstein Copilot, and now Agentforce—a platform designed for flexibility with prebuilt and customisable agents to handle diverse business needs.
“Our customers wanted more. Some wanted to tweak the agents we offer, while others wanted to create their own,” said Tyler Carlson, Salesforce’s VP of Business Development.
The tech behind it
Agentforce is powered by Salesforce’s Atlas Reasoning Engine, developed in-house. The platform connects with AI models from major players like OpenAI, Anthropic, Amazon, and Google, giving businesses access to a variety of tools.
Slack has become a testing ground for these AI agents. Currently in beta, Agentforce’s Slack integration puts automations where employees already spend their time. “Slack makes these tools easy to use and accessible,” Carlson added.
Smarter, more flexible AI
Agentforce uses ReAct prompting, a technique that helps agents break down problems into smaller steps and adjust their approach as they go. This leads to more accurate responses and hands-off task management, from answering questions to scheduling meetings.
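Salesforce hasn’t published Agentforce’s internals, but the general ReAct pattern is well documented: the model alternates a reasoning step with an action, observes the result, and adjusts. A toy sketch with a stubbed model call and a single hypothetical tool:

```python
# Generic ReAct-style loop: Thought -> Action -> Observation, repeated.
# `call_model` is a stub; a real agent would call an LLM API here.

def call_model(transcript: str) -> str:
    # Stub that pretends the model first requests a tool, then finishes.
    if "Observation:" not in transcript:
        return "Thought: I need the user's calendar. Action: get_calendar"
    return "Thought: I have what I need. Action: finish Slot booked for 3pm."

TOOLS = {"get_calendar": lambda: "Free at 3pm and 4pm"}  # hypothetical tool

def react_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        step = call_model(transcript)
        transcript += "\n" + step
        if "Action: finish" in step:
            return step.split("Action: finish", 1)[1].strip()
        tool_name = step.split("Action:", 1)[1].strip()
        observation = TOOLS[tool_name]()  # run the chosen tool
        transcript += f"\nObservation: {observation}"
    return "Gave up after max_steps."

print(react_agent("Schedule a meeting"))  # -> Slot booked for 3pm.
```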
Agentforce works with Salesforce’s proprietary LLMs and third-party models, giving clients plenty of options. To ensure security, Salesforce enforces strict data privacy policies, including limits on data retention.
Making it work for businesses
With tools like Agentbuilder, companies can design AI agents tailored to their needs. For example, an agent could sort emails or answer specific HR questions using internal data. One example is Salesforce’s collaboration with Workday to create an AI service agent for employee queries.
Salesforce is already seeing results, with Agentforce resolving 90% of customer inquiries in early trials. The goal? Broader adoption, more capabilities, and higher workloads handled by these agents.
“We’re building a broader ecosystem of partners and skills,” Carlson said. “By next year, we want Agentforce to be a must-have for businesses.”
See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Salesforce launches AI platform for automated task management appeared first on AI News.
A report by Publicis Sapient sheds light on the disparities between the C-suite and practitioners, dubbed the “V-suite,” in their perceptions and adoption of generative AI.
The report reveals a stark contrast in how the C-suite and V-suite view the potential of generative AI. While the C-suite focuses on visible use cases such as customer experience, service, and sales, the V-suite sees opportunities across various functional areas, including operations, HR, and finance.
Risk perception
The divide extends to risk perception as well. Fifty-one percent of C-level respondents expressed more concern about the risk and ethics of generative AI than other emerging technologies. In contrast, only 23 percent of the V-suite shared these worries.
Simon James, Managing Director of Data & AI at Publicis Sapient, said: “It’s likely the C-suite is more worried about abstract, big-picture dangers – such as Hollywood-style scenarios of a rapidly-evolving superintelligence – than the V-suite.”
The report also highlights the uncertainty surrounding generative AI maturity. Organisations can be at various stages of maturity simultaneously, with many struggling to define what success looks like. More than two-thirds of respondents lack a way to measure the success of their generative AI projects.
Navigating the generative AI landscape
Despite the C-suite’s focus on high-visibility use cases, generative AI is quietly transforming back-office functions. More than half of the V-suite respondents ranked generative AI as extremely important in areas like finance and operations over the next three years, compared to a smaller percentage of the C-suite.
To harness the full potential of generative AI, the report recommends a portfolio approach to innovation projects. Leaders should focus on delivering projects, controlling shadow IT, avoiding duplication, empowering domain experts, connecting business units with the CIO’s office, and engaging the risk office early and often.
Daniel Liebermann, Managing Director at Publicis Sapient, commented: “It’s as hard for leaders to learn how individuals within their organisation are using ChatGPT or Microsoft Copilot as it is to understand how they’re using the internet.”
The path forward
The report concludes with five steps to maximise innovation: adopting a portfolio approach, improving communication between the CIO’s office and the risk office, seeking out innovators within the organisation, using generative AI to manage information, and empowering team members through company culture and upskilling.
As generative AI continues to evolve, organisations must bridge the gap between the C-suite and V-suite to unlock its full potential. The future of business transformation lies in harnessing the power of a decentralised, bottom-up approach to innovation.
See also: EU introduces draft regulatory guidance for AI models
The post Generative AI: Disparities between C-suite and practitioners appeared first on AI News.
The release of the “First Draft General-Purpose AI Code of Practice” marks the EU’s effort to create comprehensive regulatory guidance for general-purpose AI models.
The development of this draft has been a collaborative effort, involving input from diverse sectors including industry, academia, and civil society. The initiative was led by four specialised Working Groups, each addressing specific aspects of AI governance and risk mitigation:
Working Group 1: Transparency and copyright-related rules
Working Group 2: Risk identification and assessment for systemic risk
Working Group 3: Technical risk mitigation for systemic risk
Working Group 4: Governance risk mitigation for systemic risk
The draft is aligned with existing laws such as the Charter of Fundamental Rights of the European Union. It takes into account international approaches, striving for proportionality to risks, and aims to be future-proof by contemplating rapid technological changes.
Key objectives outlined in the draft include:
Clarifying compliance methods for providers of general-purpose AI models
Facilitating understanding across the AI value chain, ensuring seamless integration of AI models into downstream products
Ensuring compliance with Union law on copyrights, especially concerning the use of copyrighted material for model training
Continuously assessing and mitigating systemic risks associated with AI models
Recognising and mitigating systemic risks
A core feature of the draft is its taxonomy of systemic risks, which includes types, natures, and sources of such risks. The document outlines various threats such as cyber offences, biological risks, loss of control over autonomous AI models, and large-scale disinformation. By acknowledging the continuously evolving nature of AI technology, the draft recognises that this taxonomy will need updates to remain relevant.
As AI models with systemic risks become more common, the draft emphasises the need for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure appropriate risk identification, analysis, and mitigation throughout a model’s lifecycle.
The draft suggests that providers establish processes to identify and report serious incidents associated with their AI models, offering detailed assessments and corrections as needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.
Taking a proactive stance to AI regulatory guidance
The EU AI Act, which came into force on 1 August 2024, mandates that the final version of this Code be ready by 1 May 2025. This initiative underscores the EU’s proactive stance towards AI regulation, emphasising the need for AI safety, transparency, and accountability.
As the draft continues to evolve, the working groups invite stakeholders to participate actively in refining the document. Their collaborative input will shape a regulatory framework aimed at safeguarding innovation while protecting society from the potential pitfalls of AI technology.
While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a benchmark for responsible AI development and deployment globally. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection.
This draft is open for written feedback until 28 November 2024.
See also: Anthropic urges AI regulation to avoid catastrophes
The post EU introduces draft regulatory guidance for AI models appeared first on AI News.
Microsoft Edge has evolved into more than simply a browser; it is a critical component of Microsoft’s ecosystem, meant to integrate smoothly with Windows and highlight the company’s latest innovations, such as its AI assistant, Copilot.
While these interconnections make Edge a viable choice, Microsoft’s methods for persuading consumers to choose it have been far from covert.
From default settings that prioritise Edge to persistent prompts at startup, Microsoft has made it clear they want Edge to be the go-to for Windows users. And lately, it’s upped the ante: now, Edge can launch automatically when your computer boots up, instantly nudging you to bring over your data from other browsers.
The most recent update includes an auto-checked option to import browsing data from Chrome, such as history, bookmarks, and open tabs, in the name of letting users leverage the features of its AI assistant, Copilot. Although the AI features may appeal to some, the aggressive approach has left many users feeling annoyed rather than tempted.
The Verge recently noticed that when you start up your PC, Edge might decide to open on its own, promptly displaying a pop-up for its AI assistant, Copilot. Right next to Copilot, there’s a conveniently checked box allowing Edge to import data from other browsers automatically. For some users, this seems like an overreach, raising doubts about how far Microsoft is ready to go to make Edge the browser of choice.
Microsoft has confirmed this setup and stated that customers have the option to opt out. Still, with default settings that favour data imports and an eye-catching import button, it’s easy for users to make the switch unintentionally, especially if they’re not paying attention. For those who prefer sticking with their existing browsers without interruption, the approach can feel unwelcome.
But even if users dodge the pop-ups, Edge isn’t exactly shy. Uninstalling it is a complex process, and it often gets reinstalled by Windows updates, much to the frustration of users who would rather go without. For many, this persistence feels more like a forceful sales pitch rather than a friendly suggestion.
Interestingly, this isn’t the first time Microsoft has tried this type of strategy. A similar message appeared to users earlier this year but was pulled back after strong objections. Now, it’s back, with Microsoft’s Caitlin Roulston stating the notification is meant to “give users the choice to import data from other browsers.”
In fact, Microsoft’s bold tactics go back some years. In 2022, it introduced a feature that could automatically pull data from Chrome into Edge – although users had the option to decline. In 2021, the company made it practically impossible to set any browser other than Edge as the default, resulting in enough outcry for Microsoft to back down.
While Microsoft promotes its pop-ups as a way to give users more control, the approach risks alienating those who value choice without constant nudges. The relentless push could actually backfire, as the company’s persistence may drive users toward other browsers rather than toward Edge. To truly compete, Microsoft might benefit from letting Edge’s strengths speak for themselves rather than relying on aggressive prompts to change hearts and minds.
See also: EU probes Microsoft-OpenAI and Google-Samsung AI deals
The post Microsoft tries to convert Google Chrome users appeared first on AI News.
Did you know that effective asset management practices pose challenges for almost half of small businesses? According to the latest research, 43% of businesses either track their inventory manually or, in some cases, do not record assets at all.
However, asset management is not immune to the disruptive pressure of artificial intelligence (AI), which is currently revolutionising numerous industries. The way corporations manage their tangible and intangible assets is undergoing a profound transformation as AI technology evolves. This blog explores how AI-driven fixed asset software transforms asset management and what the future holds for businesses embedding these innovations.
Introduction to fixed asset management and AI
Fixed asset management is a critical function for organisations to manage, control, and optimise the value of their physical assets. Assets can include everything from equipment and vehicles to computer systems. Traditionally, asset management has entailed manual record-keeping and periodic audits, which can be time-consuming and susceptible to human error.
AI-driven fixed asset software offers a modern solution by automating diverse aspects of asset control. This guarantees accuracy, reduces administrative overhead, and extends an asset’s useful life, ultimately contributing to significant cost savings. AI, blended with the Internet of Things (IoT), machine learning (ML), and predictive analytics, underpins the development of smart, efficient, and scalable asset management solutions.
The predictive capabilities of AI revolutionise proactive asset management. AI can predict when a piece of hardware is likely to fail or spot opportunities for optimisation by evaluating patterns and trends in data. This proactive strategy not only helps with strategic planning but also ensures the reliability of operations by preventing system outages that can cause serious disruptions to business operations and financial losses. Businesses may use AI to ensure their assets operate at peak efficiency, quickly adopt new technologies, and match operations to corporate goals.
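As a rough sketch of the failure-prediction idea (synthetic sensor data and a deliberately simple classifier; production systems use far richer telemetry and models):

```python
# Predictive-maintenance sketch: learn failure risk from sensor data.
# Synthetic data for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Features per asset: [vibration, temperature]; label: failed within 30 days.
healthy = rng.normal([2.0, 60.0], [0.5, 5.0], size=(200, 2))
failing = rng.normal([4.5, 80.0], [0.7, 6.0], size=(40, 2))
X = np.vstack([healthy, failing])
y = np.array([0] * 200 + [1] * 40)

model = LogisticRegression().fit(X, y)

# Score a new asset and flag it for proactive maintenance if risky.
new_asset = np.array([[4.2, 78.0]])
risk = model.predict_proba(new_asset)[0, 1]
print(f"Failure risk: {risk:.0%}",
      "-> schedule maintenance" if risk > 0.5 else "")
```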
AI’s advantages for fixed asset software
AI-driven fixed asset software has numerous advantages for businesses, particularly in sectors where asset management is vital to daily operations, like production, healthcare, and logistics.
Greater effectiveness: Automation significantly speeds up asset tracking, control, and upkeep. Because AI can assess huge amounts of information in real time, managers can determine the state of their assets and respond immediately.
Cost savings: Ongoing asset utilisation and predictive analysis can result in lower operating costs. AI is capable of identifying underutilised or poorly functioning items, which may assist corporations in saving money by adjusting reallocation or disposal schedules.
Enhanced compliance and reporting: Staying compliant can be challenging with increasingly stringent regulatory governance. AI ensures that compliance reports are generated accurately and on time. Moreover, the software can routinely modify asset data to mirror regulatory changes, ensuring that companies consistently comply with laws.
Improved decision-making: With AI’s analytics capabilities, managers can make better choices about which assets to invest in, when to repair, and when to retire an asset. Selections are based on real-time information and predictive models instead of guesswork or manual calculations.
Case study: Predictive portfolio management precision
Problem:
Predicting market trends and real-time portfolio optimisation was complicated for a top asset management company. Conventional approaches could not keep up with market demands, resulting in lost opportunities and less-than-ideal results.
Solution:
The company was able to quickly evaluate large datasets by implementing an AI-powered predictive analytics system. The AI algorithms examined market patterns, assessed risk factors, and dynamically altered the portfolio. The end result was a notable improvement in portfolio performance and increased forecasting accuracy.
Findings:
A 20% boost in portfolio returns was attained.
Real-time market trend information improved decision-making.
The future of AI in asset management
AI will revolutionise customer satisfaction, operational effectiveness, and decision-making in asset management. Below are the important elements that will transform asset management operations:
1) Elevated decision making
By revealing hidden patterns in huge datasets, AI will permit asset managers to make better decisions. AI can evaluate the whole portfolio, compiling financial statistics and market news, which together will improve risk posture and portfolio formulation. AI will also make real-time adaptation feasible, enabling managers to anticipate market conditions and stay ahead of market swings.
2) Automation and operational efficiency
Robo-advisors will become necessary tools, autonomously managing tasks like portfolio rebalancing and standard operations. AI’s algorithmic trading will execute decisions quickly, decreasing human intervention and cutting costs. AI will automate tedious back-office operations, including data entry and regulatory compliance procedures, ensuring smooth, streamlined workflows.
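The rebalancing arithmetic itself is simple, which is part of why it automates so well. A minimal sketch with made-up holdings and target weights:

```python
# Sketch of the arithmetic behind automated portfolio rebalancing:
# compute the trades needed to restore target weights.

def rebalance(holdings: dict, targets: dict) -> dict:
    """Return the cash amount to buy (+) or sell (-) per asset."""
    total = sum(holdings.values())
    return {asset: round(targets[asset] * total - value, 2)
            for asset, value in holdings.items()}

holdings = {"equities": 70_000, "bonds": 25_000, "cash": 5_000}
targets  = {"equities": 0.60,   "bonds": 0.30,   "cash": 0.10}
print(rebalance(holdings, targets))
# {'equities': -10000.0, 'bonds': 5000.0, 'cash': 5000.0}
```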
3) Client experience transformation
In the future, client interactions will become customised and more responsive. AI will analyse purchaser information to provide tailored funding recommendations, and AI-powered chatbots will be available 24/7 to answer queries. The technology can even simplify reporting, turning complex economic information into easily digestible, jargon-free insights, building trust and transparency in customer relationships.
Conclusion:
The future of asset management is undeniably tied to improvements in AI technology. AI-driven fixed asset software is already reshaping asset monitoring, predictive analytics, and risk management through optimisation and automation. As hyperautomation and IoT continue to mature, the possibilities for remodelling asset management are limitless.
The post Using AI technologies for future asset management appeared first on AI News.
Japan is on a mission to become a global AI powerhouse, and it’s starting with some impressive advances in AI-driven language models. Japanese technology experts are developing advanced models that grasp the unique nuances of the Japanese language and culture—essential for industries such as healthcare, finance, and manufacturing, where precision is key.
But this effort isn’t Japan’s alone. Consulting giants like Accenture, Deloitte, EY Japan, FPT, Kyndryl, and TCS Japan are partnering with NVIDIA to create AI innovation hubs across the country. The centres are using NVIDIA’s AI software and specialised Japanese language models to build tailored AI solutions, helping industries boost productivity in a digital workforce. The goal? To get Japanese companies fully on board with enterprise and physical AI.
One standout technology supporting the drive is NVIDIA’s Omniverse platform. With Omniverse, Japanese companies can create digital twins—virtual replicas of real-world assets—and test complex AI systems safely before implementing them. This is a game-changer for industries such as manufacturing and robotics, allowing businesses to fine-tune processes without the risk of real-world trial and error. This use of AI is more than just innovation; it represents Japan’s plan for addressing some major challenges ahead.
Japan faces a shrinking workforce as its population ages. With its strengths in robotics and automation, Japan is well-positioned to use AI solutions to bridge the gap. In fact, Japan’s government recently shared its vision of becoming “the world’s most AI-friendly country,” underscoring the perceived role AI will play in the nation’s future.
Supporting this commitment, Japan’s AI market hit $5.9 billion in value this year, a 31.2% growth rate according to IDC. New AI-focused consulting centres in Tokyo and Kansai give Japanese businesses hands-on access to NVIDIA’s latest technologies, equipping them to solve social challenges and aid economic growth.
Top cloud providers like SoftBank, GMO Internet Group, KDDI, Highreso, Rutilea, and SAKURA Internet are also involved, working with NVIDIA to build AI infrastructure. Backed by Japan’s Ministry of Economy, Trade and Industry, they’re establishing AI data centres across Japan to accelerate growth in robotics, automotive, healthcare, and telecoms.
NVIDIA and SoftBank have also formed a remarkable partnership to build Japan’s most powerful AI supercomputer using NVIDIA’s Blackwell platform. Additionally, SoftBank has tested the world’s first AI and 5G hybrid telecoms network with NVIDIA’s AI Aerial platform, allowing Japan to set a worldwide standard. With these developments, Japan is taking big strides toward establishing itself as a leader in the AI-powered industrial revolution.
See also: NVIDIA’s share price nosedives as antitrust clouds gather
The post NVIDIA AI Summit Japan: NVIDIA’s role in Japan’s big AI ambitions appeared first on AI News.
AI continues to transform industries, and having the right skills can make a significant difference to your career. Professionals wishing to get into this evolving field can take advantage of a variety of specialised courses that teach how to use AI in business, creativity, and data analysis.
Artificial Intelligence: Preparing Your Career for AI
Artificial Intelligence: Preparing Your Career for AI is an option for those wanting to future-proof their careers in an AI-centric workplace. The course outlines five essential steps for preparing for AI’s impact on job roles and skill requirements. Participants learn the basics of AI, strategies for aligning their career paths with AI advancements, and how to use AI responsibly.
The course is ideal for individuals at any career stage who wish to understand AI’s impact on the job market and adapt proactively.
Generative AI for Leaders
For business leaders, Generative AI for Leaders focuses on integrating AI into an organisation’s strategy. AI’s rapid advancement offers both opportunities and challenges for business leaders, who must balance innovation with ethical and operational concerns.
In this course, participants learn strategies for building AI-driven business initiatives, fostering collaboration, and addressing compliance and ethical considerations.
With a practical look at AI trends, this course prepares leaders to develop a culture that supports AI adoption and equips them with the tools needed to make informed decisions.
Business Analyst: Digital Director for AI and Data Science
Business Analyst: Digital Director for AI and Data Science is a course for business analysts and professionals that explains how to define requirements for data science and artificial intelligence projects. The course covers the requirements elicitation process for AI applications and teaches participants how to work closely with data scientists and machine learning engineers to ensure that AI projects meet business goals.
Learners gain insights into conversational AI tools, the differences between Natural Language Understanding (NLU) bots and rule-based bots, and best practices in conversation flow analysis. For business analysts, the course provides essential skills to guide AI initiatives that deliver real business value.
Prompt Engineering+: Master Speaking to AI
One valuable course is Prompt Engineering+: Master Speaking to AI, which teaches the art of creating precise instructions for generative AI models. ‘Prompt engineering’ is essential for situations in which human intent must be accurately translated into AI output.
The course covers prompt structure, including one-shot, few-shot, and zero-shot learning, as well as fundamental skills like natural language processing and Python programming. Students work with leading models including ChatGPT, Google Gemini, and DALL-E, and learn practical methods to refine and test prompts, control model output, and tackle inaccuracies.
For those looking to work directly with generative AI, this course provides a foundational skill set to optimise AI interactions.
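For readers new to the terminology, the sketch below shows what zero-, one-, and few-shot prompts can look like in practice (plain strings for illustration; provider SDKs differ):

```python
# Illustrative zero-, one-, and few-shot prompt structures.
# These are plain strings, not tied to any specific model SDK.

zero_shot = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'Arrived late and broken.'"
)

one_shot = (
    "Review: 'Love it, works perfectly.' -> positive\n"
    "Review: 'Arrived late and broken.' ->"
)

few_shot = (
    "Review: 'Love it, works perfectly.' -> positive\n"
    "Review: 'Terrible battery life.' -> negative\n"
    "Review: 'Does exactly what it promises.' -> positive\n"
    "Review: 'Arrived late and broken.' ->"
)

for name, prompt in [("zero-shot", zero_shot),
                     ("one-shot", one_shot),
                     ("few-shot", few_shot)]:
    print(f"--- {name} ---\n{prompt}\n")
```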
Canva AI: Master Canva AI Tools and Apps 2024
Content creators can benefit from Canva AI: Master Canva AI Tools and Apps 2024, a course focused on using Canva’s AI-driven tools to streamline and enhance content production. This course introduces participants to Canva’s Magic Studio, where they explore tools for creating engaging social media posts, PDFs, videos, and presentations. From text-to-image conversions to speaking avatars, the course delves into AI tools that help creators produce content efficiently.
Through hands-on projects, learners experience Canva AI’s capabilities, enabling them to produce a wide variety of content quickly and effectively—a valuable skill for social media, marketing, and creative professionals.
Conclusion
These courses offer a comprehensive toolkit for mastering AI skills in various fields. Embracing these opportunities can empower professionals to lead, create, and adapt in AI-driven organisations. Whether you’re a business leader, a content creator, or a data professional, investing in AI skills prepares you to navigate the future with confidence and purpose.
See also: Understanding AI’s impact on the workforce
The post AI courses to boost your skills and stay ahead appeared first on AI News.
OpenAI is facing diminishing returns with its latest AI model while navigating the pressures of recent investments.
According to The Information, OpenAI’s next AI model – codenamed Orion – is delivering smaller performance gains compared to its predecessors.
In employee testing, Orion reportedly achieved the performance level of GPT-4 after completing just 20% of its training. However, the transition from GPT-4 to the anticipated GPT-5 is said to exhibit smaller quality improvements than the leap from GPT-3 to GPT-4.
“Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks,” the report states. “Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee.”
Early stages of AI training usually yield the most significant improvements, while subsequent phases typically result in smaller performance gains. Consequently, the remaining 80% of training is unlikely to deliver advancements on par with previous generational improvements.
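This pattern is often described with power-law scaling curves, under which each doubling of compute buys a smaller absolute improvement. An illustrative sketch (the constants are invented, not OpenAI’s numbers):

```python
# Stylised scaling-law illustration: loss falls as a power law in
# training compute, so each doubling yields a smaller absolute gain.
# Constants are invented for illustration, not any lab's real figures.

def loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
    return a * compute ** -b

prev = loss(1)
for doublings in range(1, 9):
    cur = loss(2 ** doublings)
    print(f"{2**doublings:>4}x compute: loss {cur:.3f} (gain {prev - cur:.3f})")
    prev = cur
# The printed "gain" column shrinks with every doubling.
```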
This situation with its latest AI model emerges at a pivotal time for OpenAI, following a recent funding round that saw the company raise $6.6 billion. With this financial backing comes increased expectations from investors, as well as technical challenges that complicate traditional scaling methodologies in AI development.
If these early versions do not meet expectations, OpenAI’s future fundraising prospects may not attract the same level of interest.
The limitations highlighted in the report underline a significant challenge confronting the entire AI industry: the diminishing availability of high-quality training data and the necessity to maintain relevance in an increasingly competitive field.
According to a paper (PDF) that was published in June, AI firms will deplete the pool of publicly available human-generated text data between 2026 and 2032. The Information notes that developers have “largely squeezed as much out of” the data that has been used for enabling the rapid AI advancements we’ve seen in recent years.
To address these challenges, OpenAI is fundamentally rethinking its AI development strategy.
“In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law,” explains The Information.
As OpenAI navigates these challenges, the company must balance innovation with practical application and investor expectations. However, the ongoing exodus of leading figures from the company won’t help matters.
See also: ASI Alliance launches AIRIS that ‘learns’ in Minecraft
The post OpenAI faces diminishing returns with latest AI model appeared first on AI News.
The Tony Blair Institute (TBI) has examined AI’s impact on the workforce. The report outlines AI’s potential to reshape work environments, boost productivity, and create opportunities—while warning of potential challenges ahead.
“Technology has a long history of profoundly reshaping the world of work,” the report begins.
From the agricultural revolution to the digital age, each wave of innovation has redefined labour markets. Today, AI presents a seismic shift, advancing rapidly and prompting policymakers to prepare for change.
Economic opportunities
The TBI report estimates that AI, when fully adopted by UK firms, could significantly increase productivity. It suggests that AI could save "almost a quarter of private-sector workforce time," equivalent to the annual output of 6 million workers.
Most of these time savings are expected to stem from AI-enabled software performing cognitive tasks such as data analysis and routine administrative operations.
The report identifies sectors reliant on routine cognitive tasks, such as banking and finance, as those with significant exposure to AI. However, sectors like skilled trades or construction – which involve complex manual tasks – are likely to see less direct impact.
While AI can result in initial job losses, it also has the potential to create new demand by fostering economic growth and new industries.
The report expects these job losses to be balanced by new job creation. Historically, technology has spurred new employment opportunities, as innovation leads to the development of new products and services.
Shaping future generations
AI’s potential extends into education, where it could assist both teachers and students.
The report suggests that AI could help “raise educational attainment by around six percent” on average. By personalising and supporting learning, AI has the potential to equalise access to opportunities and improve the quality of the workforce over time.
Health and wellbeing
Beyond education, AI offers potential benefits in healthcare, supporting a healthier workforce and reducing sickness costs.
The report highlights AI’s role in speeding medical research, enabling preventive healthcare, and helping those with disabilities re-enter the workforce.
Workplace transformation
The report acknowledges potential workplace challenges, such as increased monitoring and stress from AI tools. It stresses the importance of managing these technologies thoughtfully to “deliver a more engaging, inclusive and safe working environment.”
To mitigate potential disruption, the TBI outlines recommendations. These include upgrading labour-market infrastructure and utilising AI for job matching.
The report suggests creating an “Early Awareness and Opportunity System” to help workers understand the impact of AI on their jobs and provide advice on career paths.
Preparing for an AI-powered future
In light of the uncertainties surrounding AI’s impact on the workforce, the TBI urges policy changes to maximise benefits. Recommendations include incentivising AI adoption across industries, developing AI-pathfinder programmes, and creating challenge prizes to address public-sector labour shortages.
The report concludes that while AI presents risks, the potential gains are too significant to ignore.
Policymakers are encouraged to adopt a “pro-innovation” stance while being attuned to the risks, fostering an economy that is dynamic and resilient.
See also: Anthropic urges AI regulation to avoid catastrophes
The post Understanding AI’s impact on the workforce appeared first on AI News.
The intelligence displayed by generative AI chatbots like OpenAI’s ChatGPT has captured the imagination of individuals and corporations, and artificial intelligence has suddenly become the most exciting area of technology innovation.
AI has been recognised as a game changer, with potential to transform many aspects of our lives. From personalised medicine to autonomous vehicles, automated investments to digital assets, the possibilities enabled by AI seem endless.
But as transformational as AI will be, there are a lot of risks posed by this new technology. While fears about a malicious, Skynet-style AI system going rogue are misplaced, the dangers of AI centralisation are not. As companies like Microsoft, Google and Nvidia forge ahead in their pursuit of AI, fears about the concentration of power in the hands of just a few centralised players are becoming more pronounced.
Why should we worry about centralised AI?
Monopoly power
The most pressing issue arising from centralised AI is the prospect of a few tech giants achieving monopolistic control over the industry. These companies have already accumulated a very significant market share in AI, giving them possession of vast amounts of data. They also control the infrastructure that AI systems run on, enabling them to stifle competitors, hobble innovation, and perpetuate economic inequality.
By achieving a monopoly over the development of AI, these companies are more likely to have an unfair influence on regulatory frameworks, which they can manipulate to their advantage. It will mean that smaller startups, which lack the enormous resources of big tech giants, will struggle to keep up with the pace of innovation. Those that do survive and look like they might thrive will almost certainly end up being acquired, further concentrating power in the hands of the few. The result will be less diversity in terms of AI development, fewer choices for consumers, and less favourable terms, limiting the use-cases and economic opportunities promised by AI.
Bias and Discrimination
Aside from monopolistic control, there are genuine fears around the bias of AI systems, and these concerns will take on more importance as society increasingly relies on AI.
The risk stems from the fact that organisations are becoming more reliant on automated systems to make decisions in many areas. It’s not unusual for a company to employ AI algorithms to filter job applicants, for example, and the risk is that a biased system could unfairly exclude a subset of candidates based on their ethnicity, age or location. AI is also used by insurance companies to set policy rates, by financial services firms to determine if someone qualifies for a loan and the amount of interest they’ll need to pay, and by law enforcement to determine which areas are more likely to see higher crime. In all of these use-cases, the potential implications of biased AI systems are extremely worrying.
Whether it’s law enforcement targeting minority communities, discriminatory lending practices or something else, centralised AI can potentially exacerbate social inequality and enable systemic discrimination.
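Detecting this kind of bias often starts with simple selection-rate comparisons. A minimal sketch of one common check, the "four-fifths rule" used in US hiring audits, with toy numbers:

```python
# One common bias check: compare selection rates across groups.
# The four-fifths rule flags ratios below 0.8. Toy data only.

def selection_rates(outcomes: dict) -> dict:
    """Map each group to its selection rate (selected / total)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

outcomes = {"group_a": (50, 100), "group_b": (20, 100)}  # (selected, applicants)
rates = selection_rates(outcomes)
ratio = min(rates.values()) / max(rates.values())
print(f"Selection rates: {rates}, ratio: {ratio:.2f}",
      "-> potential adverse impact" if ratio < 0.8 else "-> within guideline")
```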
Privacy and surveillance
Another risk posed by centralised AI systems is the lack of privacy protections. When just a few big companies control the vast majority of data generated by AI, they gain the ability to carry out unprecedented surveillance on their users. The data accumulated by the most dominant AI platforms can be used to monitor, analyse and predict an individual’s behaviour with incredible accuracy, eroding privacy and increasing the potential for the information to be misused.
It’s of particular concern in countries with authoritarian governments, where data can be weaponised to create more sophisticated tools for monitoring citizens. But even in democratic societies, there is a threat posed by increased surveillance, as exemplified by the revelations of Edward Snowden about the US National Security Agency’s Prism program.
Corporations can also potentially misuse consumers’ data to increase their profits. In addition, when centralised entities accumulate vast amounts of sensitive data, this makes them more lucrative targets for hackers, increasing the risk of data leaks.
Security risks
Issues of national security can also arise due to centralised AI. For instance, there are justified fears that AI systems can be weaponised by nations, used to conduct cyberwarfare, engage in espionage, and develop new weapons systems. AI could become a key tool in future wars, raising the stakes in geopolitical conflicts.
AI systems themselves can also be targeted. As nations increase their reliance on AI, such systems will make for enticing targets, as they are obvious single points of failure. Take out an AI system and you could disrupt the entire traffic flow of cities, take down electrical grids, and more.
Ethics
The other major concern around centralised AI is ethics. The handful of companies that control AI systems would gain substantial influence over a society’s cultural norms and values, and will often prioritise profit, raising further ethical concerns.
For example, AI algorithms are already being used widely by social media platforms to moderate content, in an attempt to identify and filter out offensive posts. The worry is that algorithms, either by accident or design, might end up suppressing free speech.
There is already controversy over the effectiveness of AI-powered moderation systems, with numerous seemingly innocuous posts blocked or taken down by automated algorithms. This has fuelled speculation that such systems are not malfunctioning, but are instead being tuned behind the scenes to fit the political narrative a platform wants to promote.
The alternative? Decentralised AI
The only logical counterweight to centralised AI is the development of decentralised AI systems, which ensure that control of the technology remains in the hands of the majority rather than the few. In this way, no single company or entity can gain a significant influence over the direction of AI’s development.
When the development and governance of AI is shared by thousands or millions of entities, its progress will be more equitable, with greater alignment to the needs of the individual. The result will be more diverse AI applications, with an almost endless selection of models used by different systems, instead of a few models that dominate the industry.
Decentralised AI systems will also provide checks and balances against the risk of mass surveillance and data manipulation. Whereas centralised AI can be weaponised against the interests of the many, decentralised AI hedges against that kind of oppression.
The main advantage of decentralised AI is that everyone shares control over the technology’s evolution, preventing any single entity from gaining an outsized influence over its development.
How to decentralise AI
Decentralised AI involves a rethink of the layers that make up the AI technology stack, including elements like the infrastructure (compute and networking resources), the data, models, training, inference, and fine-tuning processes.
We can’t simply pin our hopes on open-source models if the underlying infrastructure remains fully centralised by cloud computing giants like Amazon, Microsoft and Google. We need to ensure that every aspect of AI is decentralised.
The best way to decentralise the AI stack is to break it down into modular components and create markets around them based on supply and demand. One such example of how this can work is Spheron, which has created a Decentralised Physical Infrastructure Network (DePIN) that anyone can participate in.
With Spheron’s DePIN, anyone is free to share their underutilised computing resources, essentially renting them out to those who need infrastructure to host their AI applications. So a graphic designer with a powerful GPU-equipped laptop can rent out its processing power to the DePIN when they’re not using it for their own work, and be rewarded with token incentives.
What this means is that the AI infrastructure layer becomes widely distributed and decentralised, with no single provider in control. It’s enabled by blockchain technology and smart contracts, which provide transparency, immutability and automation.
DePIN can also work for open-source models and underlying data. For instance, it’s possible to share training datasets on a decentralised network like Qubic, which will make sure the provider of that data is rewarded each time their information is accessed by an AI system.
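To make the pattern concrete, the sketch below models a DePIN-style resource market in miniature: providers list spare capacity, a renter pays per unit consumed, and rewards accrue to providers automatically. This is an illustrative toy, not Spheron’s or Qubic’s actual protocol; the names, the flat token price, and the first-come matching logic are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Provider:
    address: str              # provider's wallet address (hypothetical)
    gpu_hours: float          # spare capacity offered to the network
    earned_tokens: float = 0.0

@dataclass
class ResourceMarket:
    providers: list[Provider] = field(default_factory=list)
    price_per_gpu_hour: float = 1.0  # a real network would discover this via supply and demand

    def list_capacity(self, provider: Provider) -> None:
        """A participant advertises idle hardware on the network."""
        self.providers.append(provider)

    def rent(self, hours_needed: float) -> float:
        """Match a renter against listed capacity, paying each provider per hour supplied."""
        cost = 0.0
        for p in self.providers:
            if hours_needed <= 0:
                break
            supplied = min(p.gpu_hours, hours_needed)
            p.gpu_hours -= supplied
            p.earned_tokens += supplied * self.price_per_gpu_hour
            hours_needed -= supplied
            cost += supplied * self.price_per_gpu_hour
        if hours_needed > 0:
            raise RuntimeError("not enough capacity listed on the network")
        return cost

market = ResourceMarket()
market.list_capacity(Provider("0xdesigner", gpu_hours=8.0))   # an idle laptop GPU
market.list_capacity(Provider("0xrig", gpu_hours=24.0))       # a larger machine
print(market.rent(10.0))  # the renter pays 10 tokens, split across both providers
```

In a real DePIN, these roles would be enforced by smart contracts on-chain, so that capacity accounting, payment and rewards are transparent and automatic rather than entrusted to any single operator.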
For access and permissions to be decentralised, every part of the technology stack needs to be distributed in this way. The AI industry currently falls short of that standard: although open-source models have become extremely popular among AI developers, most people continue to rely on proprietary cloud networks, meaning the training and inference processes remain heavily centralised.
But there are strong incentives for decentralisation to win out. One of the primary advantages of DePIN networks, for example, is that they help to reduce overheads. Because networks like Spheron don’t rely on intermediaries, participants don’t need to make payments to, or share revenue with, third parties. Moreover, they can afford to be more competitive on pricing than corporations under pressure to grow profits.
Decentralisation must win
The future of AI holds a lot of potential, but it’s also perilous. While the capabilities of AI systems have improved dramatically in the last few years, most of the advances have been driven by a handful of all-powerful companies, increasing their influence over the industry. There’s a price to pay for this, and not just in monetary terms.
The only reasonable alternative is to promote the wider adoption of decentralised AI, which can broaden access to the technology and make it more flexible. By allowing everyone to participate in AI’s development on an equal footing, we’ll see more diverse, interesting, and useful applications that put their users first and benefit everyone equally.
Building a decentralised AI future will involve a great deal of coordination and collaboration across every layer of the AI stack. Fortunately, there are strong incentives for participants to do just that. And again, the incentives are not just monetary.
The post Centralised AI is dangerous: how can we stop it? appeared first on AI News.
The ASI Alliance has introduced AIRIS (Autonomous Intelligent Reinforcement Inferred Symbolism), an AI agent that “learns” within the popular game Minecraft.
AIRIS represents the first proto-AGI (Artificial General Intelligence) to harness a comprehensive tech stack across the alliance.
SingularityNET, founded by renowned AI researcher Dr Ben Goertzel, uses agent technology from Fetch.ai, incorporates Ocean Data for long-term memory capabilities, and is soon expected to integrate CUDOS Compute infrastructure for scalable processing power.
“AIRIS is a significant step in the direction of practical, scalable neural-symbolic learning, and – alongside its already powerful and valuable functionality – it illustrates several general points about neural-symbolic systems, such as their ability to learn precise generalisable conclusions from small amounts of data,” explains Goertzel.
According to the alliance, this collaborative approach propels AIRIS towards AGI, making it one of the first intelligent systems to combine autonomous, adaptive learning with practical applications for real-world scenarios.
AIRIS’ learning mechanisms
AIRIS is designed to build its understanding by interacting directly with its environment, moving beyond traditional AI approaches that depend on predefined rules or vast datasets. Instead, AIRIS evolves through observation, experimentation, and continual refinement of its unique “rule set”.
This system facilitates a profound level of problem-solving and contextual comprehension, with its implementation in Minecraft setting a new benchmark for AI interaction with both digital and tangible landscapes.
Shifting from a controlled 2D grid to the sophisticated 3D world of Minecraft, AIRIS faced numerous challenges—including terrain navigation and adaptive problem-solving in a dynamic environment. This transition underscores AIRIS’ autonomy in navigation, exploration, and learning.
The AIRIS Minecraft Agent distinguishes itself from other AI entities through several key features:
Dynamic navigation: AIRIS first surveys its surroundings to formulate movement strategies, adapting to new environments in real-time. Its capabilities include manoeuvring around obstacles, jumping over barriers, and anticipating reactions to varied terrains.
Obstacle adaptation: It learns to navigate around impediments like cliffs and forested areas, refining its rule set with every new challenge to avoid redundant errors and minimise needless trial-and-error efforts.
Efficient pathfinding: Via continuous optimisation, AIRIS advances from initially complex navigation paths to streamlined, direct routes as it “comprehends” Minecraft dynamics.
Real-time environmental adaptation: Contrasting with conventional reinforcement learning systems that demand extensive retraining for new environments, AIRIS adapts immediately to unfamiliar regions, crafting new rules based on partial observations dynamically.
AIRIS’ adeptness in dealing with fluctuating terrains, including water bodies and cave systems, introduces sophisticated rule refinement founded on hands-on experience. Additionally, AIRIS boasts optimised computational efficiency, enabling real-time management of complex rules without compromising performance.
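The observe-act-refine pattern described above can be sketched in a few lines. The following toy agent is purely illustrative: AIRIS’ actual rule representation and learning algorithm are not detailed in the announcement, so every name here, and the two-action stand-in environment, are assumptions.

```python
import random

# A learned "rule" maps (situation, action) to the outcome last observed.
rules: dict[tuple[str, str], str] = {}

def choose_action(situation: str, actions: list[str]) -> str:
    # Exploit a rule known to make progress; otherwise experiment with an untried action.
    for action in actions:
        if rules.get((situation, action)) == "progress":
            return action
    untried = [a for a in actions if (situation, a) not in rules]
    return random.choice(untried or actions)

def refine(situation: str, action: str, outcome: str) -> None:
    # Record or overwrite the rule so the same mistake is not repeated.
    rules[(situation, action)] = outcome

def environment(situation: str, action: str) -> str:
    # Stand-in world: jumping clears obstacles, walking into them does not.
    if situation == "obstacle_ahead" and action != "jump":
        return "blocked"
    return "progress"

for _ in range(20):
    situation = random.choice(["open_ground", "obstacle_ahead"])
    action = choose_action(situation, ["walk", "jump"])
    refine(situation, action, environment(situation, action))

print(rules)  # the agent has learned to jump at obstacles, with no retraining phase
```

Unlike the weights of a conventional reinforcement learner, a rule table of this kind can be inspected directly, which is the transparency property highlighted in the quotes below.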
Future applications
Minecraft serves as an excellent launchpad for AIRIS’ prospective applications, establishing a solid foundation for expansive implementations:
Enhanced object interaction: Forthcoming stages will empower AIRIS to engage more deeply with its surroundings, improving capabilities in object manipulation, construction, and even crafting. This will require AIRIS to develop a more refined decision-making framework for contextual tasks.
Social AI collaboration: Plans are underway to incorporate AIRIS in multi-agent scenarios, where agents learn, interact, and fulfil shared objectives, simulating real-world social dynamics and problem-solving collaboratively.
Abstract and strategic reasoning: Further development will enhance AIRIS’s reasoning, enabling it to tackle complex goals such as resource management and prioritisation, moving beyond basic navigation towards strategic gameplay.
The transition of AIRIS to 3D environments signifies a pivotal advancement in the ASI Alliance’s mission to cultivate AGI. Through AIRIS’s achievements in navigating and learning within Minecraft, the ASI Alliance aspires to expedite its deployment in the real world, pioneering applications for autonomous robots, intelligent home assistants, and other systems requiring adaptive learning and problem-solving capacities.
Berick Cook, AI Developer at SingularityNET and creator of AIRIS, said: “AIRIS is a whole new way of approaching the problem of machine learning. We are only just beginning to explore its capabilities. We are excited to see how we can apply it to problems that have posed a significant challenge for traditional reinforcement learning.
“The most important aspect of AIRIS to me is its transparency and explainability. Moving away from ‘Black Box’ AI represents a significant leap forward in the pursuit of safe, ethical, and beneficial AI.”
The innovative approach to AI evident in AIRIS – emphasising self-directed learning and continuous rule refinement – lays the foundation for AI systems capable of independent functioning in unpredictable real-world environments. Minecraft’s intricate ecosystem enables the system to hone its skills within a controlled yet expansive virtual setting, effectively bridging the divide between simulation and reality.
The AIRIS Minecraft Agent represents the inaugural tangible step towards an AI that learns from, adapts to and makes autonomous decisions about its environment. This accomplishment illustrates the potential of such technology to re-envision AI’s role across various industries.
(Image by SkyeWeste)
See also: SingularityNET bets on supercomputer network to deliver AGI
The post ASI Alliance launches AIRIS that ‘learns’ in Minecraft appeared first on AI News.
The combination of artificial intelligence and policymaking can occasionally have unforeseen repercussions, as seen recently in Alaska.
In an unusual turn of events, Alaska legislators reportedly used inaccurate, AI-generated citations to justify a proposed policy banning cellphones in schools. As reported by The Alaska Beacon, Alaska’s Department of Education and Early Development (DEED) presented a policy draft containing references to academic studies that simply did not exist.
The situation arose when Alaska’s Education Commissioner, Deena Bishop, used generative AI to draft the cellphone policy. The document produced by the AI included supposed scholarly references that were neither verified nor accurate, yet the document did not disclose the use of AI in its preparation. Some of the AI-generated content reached the Alaska State Board of Education and Early Development before it could be reviewed, potentially influencing board discussions.
Commissioner Bishop later claimed that AI was used only to “create citations” for an initial draft and asserted that she corrected the errors before the meeting by sending updated citations to board members. However, AI “hallucinations”—fabricated information generated when AI attempts to create plausible yet unverified content—were still present in the final document that was voted on by the board.
The final resolution, published on DEED’s website, directs the department to establish a model policy for cellphone restrictions in schools. Unfortunately, the document included six citations, four of which seemed to be from respected scientific journals. However, the references were entirely made up, with URLs that led to unrelated content. The incident shows the risks of using AI-generated data without proper human verification, especially when making policy rulings.
Alaska’s case is not unique. AI hallucinations are increasingly common across professional sectors. For example, some legal professionals have faced consequences for citing AI-generated, fictitious cases in court. Similarly, academic papers created using AI have included distorted data and fake sources, raising serious credibility concerns. When left unchecked, generative AI models, which are designed to produce content based on patterns rather than factual accuracy, can easily produce misleading citations.
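A basic safeguard is to resolve every citation in a draft before it circulates. The sketch below flags dead links and pages whose content does not match the claimed title; it is a minimal illustration using the Python requests library, with a hypothetical citation list, and it reduces rather than removes the need for human review.

```python
import requests

# (claimed title, URL) pairs extracted from a draft document (hypothetical examples)
citations = [
    ("Example study on cellphone use in schools", "https://doi.org/10.0000/example-doi"),
]

def check_citation(title: str, url: str) -> str:
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
    except requests.RequestException:
        return "UNREACHABLE: verify manually"
    if resp.status_code != 200:
        return f"HTTP {resp.status_code}: likely a broken or fabricated reference"
    # Crude heuristic: do any substantial words from the title appear on the page?
    keywords = [w.lower() for w in title.split() if len(w) > 4]
    if not any(k in resp.text.lower() for k in keywords):
        return "RESOLVES, BUT CONTENT DOES NOT MATCH TITLE: verify manually"
    return "OK (still requires human review)"

for title, url in citations:
    print(f"{title} -> {check_citation(title, url)}")
```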
The reliance on AI-generated data in policymaking, particularly in education, carries significant risks. When policies are developed based on fabricated information, they may misallocate resources and potentially harm students. For instance, a policy restricting cellphone use based on fabricated data may divert attention from more effective, evidence-based interventions that could genuinely benefit students.
Furthermore, using unverified AI data can erode public trust in both the policymaking process and AI technology itself. Such incidents underscore the importance of fact-checking, transparency, and caution when using AI in sensitive decision-making areas, especially in education, where impact on students can be profound.
Alaska officials attempted to downplay the situation, referring to the fabricated citations as “placeholders” intended for later correction. However, the document with the “placeholders” was still presented to the board and used as the basis for a vote, underscoring the need for rigorous oversight when using AI.
(Photo by Hartono Creative Studio)
See also: Anthropic urges AI regulation to avoid catastrophes
The post AI hallucinations gone wrong as Alaska uses fake stats in policy appeared first on AI News.
A global study has unveiled a stark contrast in attitudes towards embracing in-vehicle AI between Eastern and Western markets, with Western drivers particularly reluctant.
The research – conducted by MHP – surveyed 4,700 car drivers across China, the US, Germany, the UK, Italy, Sweden, and Poland, revealing significant geographical disparities in AI acceptance and understanding.
According to the study, while AI is becoming integral to modern vehicles, Western consumers remain hesitant about its implementation and value proposition.
Regional disparities
The study found that 48 percent of Chinese respondents view in-car AI predominantly as an opportunity, while merely 23 percent of European respondents share this optimistic outlook. In Europe, 39 percent believe AI’s opportunities and risks are broadly balanced, while 24 percent take a negative stance, suggesting the risks outweigh potential benefits.
Understanding of AI technology also varies significantly by region. While over 80 percent of Chinese respondents claim to understand AI’s use in cars, this figure drops to just 54 percent among European drivers, highlighting a notable knowledge gap.
Marcus Willand, Partner at MHP and one of the study’s authors, notes: “The figures show that the prospect of greater safety and comfort due to AI can motivate purchasing decisions. However, the European respondents in particular are often hesitant and price-sensitive.”
The willingness to pay for AI features shows an equally stark divide. Just 23 percent of European drivers expressed willingness to pay for AI functions, compared to 39 percent of Chinese drivers. The study suggests that most users now expect AI features to be standard rather than optional extras.
Dr Nils Schaupensteiner, Associated Partner at MHP and study co-author, said: “Automotive companies need to create innovations with clear added value and develop both direct and indirect monetisation of their AI offerings, for example through data-based business models and improved services.”
In-vehicle AI opportunities
Despite these challenges, traditional automotive manufacturers maintain a trust advantage over tech giants. The study reveals that 64 percent of customers trust established car manufacturers with AI implementation, compared to 50 percent for technology firms like Apple, Google, and Microsoft.
The research identified several key areas where AI could provide significant value across the automotive industry’s value chain, including pattern recognition for quality management, enhanced data management capabilities, AI-driven decision-making systems, and improved customer service through AI-powered communication tools.
“It is worth OEMs and suppliers considering the opportunities offered by the new technology along their entire value chain,” explains Augustin Friedel, Senior Manager and study co-author. “However, the possible uses are diverse and implementation is quite complex.”
The study reveals that while up to 79 percent of respondents express interest in AI-powered features such as driver assistance systems, intelligent route planning, and predictive maintenance, manufacturers face significant challenges in monetising these capabilities, particularly in the European market.
See also: MIT breakthrough could transform robot training
The post Western drivers remain sceptical of in-vehicle AI appeared first on AI News.